00:00:00.001 Started by upstream project "autotest-per-patch" build number 132463 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.264 > git --version # 'git version 2.39.2' 00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.299 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.299 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.579 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.589 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.602 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.602 > git config core.sparsecheckout # timeout=10 00:00:07.613 > git read-tree -mu HEAD # timeout=10 00:00:07.628 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.651 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.651 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.737 [Pipeline] Start of Pipeline 00:00:07.751 [Pipeline] library 00:00:07.753 Loading library shm_lib@master 00:00:07.753 Library shm_lib@master is cached. Copying from home. 00:00:07.769 [Pipeline] node 00:00:07.779 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.781 [Pipeline] { 00:00:07.791 [Pipeline] catchError 00:00:07.792 [Pipeline] { 00:00:07.804 [Pipeline] wrap 00:00:07.814 [Pipeline] { 00:00:07.820 [Pipeline] stage 00:00:07.821 [Pipeline] { (Prologue) 00:00:07.839 [Pipeline] echo 00:00:07.840 Node: VM-host-SM16 00:00:07.846 [Pipeline] cleanWs 00:00:07.855 [WS-CLEANUP] Deleting project workspace... 00:00:07.855 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.860 [WS-CLEANUP] done 00:00:08.050 [Pipeline] setCustomBuildProperty 00:00:08.141 [Pipeline] httpRequest 00:00:08.490 [Pipeline] echo 00:00:08.492 Sorcerer 10.211.164.20 is alive 00:00:08.502 [Pipeline] retry 00:00:08.505 [Pipeline] { 00:00:08.518 [Pipeline] httpRequest 00:00:08.522 HttpMethod: GET 00:00:08.522 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.523 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.536 Response Code: HTTP/1.1 200 OK 00:00:08.536 Success: Status code 200 is in the accepted range: 200,404 00:00:08.537 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.699 [Pipeline] } 00:00:13.717 [Pipeline] // retry 00:00:13.724 [Pipeline] sh 00:00:14.005 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.022 [Pipeline] httpRequest 00:00:14.319 [Pipeline] echo 00:00:14.321 Sorcerer 10.211.164.20 is alive 00:00:14.330 [Pipeline] retry 00:00:14.332 [Pipeline] { 00:00:14.346 [Pipeline] httpRequest 00:00:14.350 HttpMethod: GET 00:00:14.351 URL: http://10.211.164.20/packages/spdk_1e70ad0e1011fe1abca7402562869748d0ce2887.tar.gz 00:00:14.351 Sending request to url: http://10.211.164.20/packages/spdk_1e70ad0e1011fe1abca7402562869748d0ce2887.tar.gz 00:00:14.357 Response Code: HTTP/1.1 200 OK 00:00:14.357 Success: Status code 200 is in the accepted range: 200,404 00:00:14.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_1e70ad0e1011fe1abca7402562869748d0ce2887.tar.gz 00:01:54.142 [Pipeline] } 00:01:54.163 [Pipeline] // retry 00:01:54.171 [Pipeline] sh 00:01:54.453 + tar --no-same-owner -xf spdk_1e70ad0e1011fe1abca7402562869748d0ce2887.tar.gz 00:01:57.796 [Pipeline] sh 00:01:58.076 + git -C spdk log --oneline -n5 00:01:58.076 1e70ad0e1 util: multi-level fd_group nesting 00:01:58.076 09301ca15 util: keep track of nested child fd_groups 00:01:58.076 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:58.076 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:58.076 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:58.094 [Pipeline] writeFile 00:01:58.111 [Pipeline] sh 00:01:58.396 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:58.407 [Pipeline] sh 00:01:58.687 + cat autorun-spdk.conf 00:01:58.687 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.687 SPDK_TEST_NVMF=1 00:01:58.687 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:58.687 SPDK_TEST_URING=1 00:01:58.687 SPDK_TEST_USDT=1 00:01:58.687 SPDK_RUN_UBSAN=1 00:01:58.687 NET_TYPE=virt 00:01:58.687 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.694 RUN_NIGHTLY=0 00:01:58.696 [Pipeline] } 00:01:58.709 [Pipeline] // stage 00:01:58.724 [Pipeline] stage 00:01:58.726 [Pipeline] { (Run VM) 00:01:58.737 [Pipeline] sh 00:01:59.016 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:59.016 + echo 'Start stage prepare_nvme.sh' 00:01:59.016 Start stage prepare_nvme.sh 00:01:59.016 + [[ -n 2 ]] 00:01:59.016 + disk_prefix=ex2 00:01:59.016 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:59.016 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:59.016 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:59.016 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.016 ++ SPDK_TEST_NVMF=1 00:01:59.016 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.016 ++ SPDK_TEST_URING=1 00:01:59.016 ++ SPDK_TEST_USDT=1 00:01:59.016 ++ SPDK_RUN_UBSAN=1 00:01:59.016 ++ NET_TYPE=virt 00:01:59.016 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.016 ++ RUN_NIGHTLY=0 00:01:59.016 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:59.016 + nvme_files=() 00:01:59.016 + declare -A nvme_files 00:01:59.016 + backend_dir=/var/lib/libvirt/images/backends 00:01:59.016 + nvme_files['nvme.img']=5G 00:01:59.016 + nvme_files['nvme-cmb.img']=5G 00:01:59.016 + nvme_files['nvme-multi0.img']=4G 00:01:59.016 + nvme_files['nvme-multi1.img']=4G 00:01:59.016 + nvme_files['nvme-multi2.img']=4G 00:01:59.016 + nvme_files['nvme-openstack.img']=8G 00:01:59.016 + nvme_files['nvme-zns.img']=5G 00:01:59.016 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:59.016 + (( SPDK_TEST_FTL == 1 )) 00:01:59.016 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:59.017 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:59.017 + for nvme in "${!nvme_files[@]}" 00:01:59.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:59.017 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.017 + for nvme in "${!nvme_files[@]}" 00:01:59.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:59.017 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.017 + for nvme in "${!nvme_files[@]}" 00:01:59.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:59.276 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:59.276 + for nvme in "${!nvme_files[@]}" 00:01:59.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:59.276 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.276 + for nvme in "${!nvme_files[@]}" 00:01:59.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:59.276 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.276 + for nvme in "${!nvme_files[@]}" 00:01:59.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:59.276 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.276 + for nvme in "${!nvme_files[@]}" 00:01:59.276 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:59.535 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.535 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:59.535 + echo 'End stage prepare_nvme.sh' 00:01:59.535 End stage prepare_nvme.sh 00:01:59.546 [Pipeline] sh 00:01:59.827 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:59.827 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b 
/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:59.827 00:01:59.827 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:59.827 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:59.827 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:59.827 HELP=0 00:01:59.827 DRY_RUN=0 00:01:59.827 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:59.827 NVME_DISKS_TYPE=nvme,nvme, 00:01:59.827 NVME_AUTO_CREATE=0 00:01:59.827 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:59.827 NVME_CMB=,, 00:01:59.827 NVME_PMR=,, 00:01:59.827 NVME_ZNS=,, 00:01:59.827 NVME_MS=,, 00:01:59.827 NVME_FDP=,, 00:01:59.827 SPDK_VAGRANT_DISTRO=fedora39 00:01:59.827 SPDK_VAGRANT_VMCPU=10 00:01:59.827 SPDK_VAGRANT_VMRAM=12288 00:01:59.827 SPDK_VAGRANT_PROVIDER=libvirt 00:01:59.827 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:59.827 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:59.827 SPDK_OPENSTACK_NETWORK=0 00:01:59.827 VAGRANT_PACKAGE_BOX=0 00:01:59.827 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:59.827 FORCE_DISTRO=true 00:01:59.827 VAGRANT_BOX_VERSION= 00:01:59.827 EXTRA_VAGRANTFILES= 00:01:59.827 NIC_MODEL=e1000 00:01:59.827 00:01:59.827 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:59.827 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:03.112 Bringing machine 'default' up with 'libvirt' provider... 00:02:03.371 ==> default: Creating image (snapshot of base box volume). 00:02:03.629 ==> default: Creating domain with the following settings... 
00:02:03.629 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732286417_11b45a5d7dc3535453b0 00:02:03.629 ==> default: -- Domain type: kvm 00:02:03.629 ==> default: -- Cpus: 10 00:02:03.629 ==> default: -- Feature: acpi 00:02:03.629 ==> default: -- Feature: apic 00:02:03.629 ==> default: -- Feature: pae 00:02:03.629 ==> default: -- Memory: 12288M 00:02:03.629 ==> default: -- Memory Backing: hugepages: 00:02:03.629 ==> default: -- Management MAC: 00:02:03.629 ==> default: -- Loader: 00:02:03.629 ==> default: -- Nvram: 00:02:03.629 ==> default: -- Base box: spdk/fedora39 00:02:03.629 ==> default: -- Storage pool: default 00:02:03.629 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732286417_11b45a5d7dc3535453b0.img (20G) 00:02:03.629 ==> default: -- Volume Cache: default 00:02:03.629 ==> default: -- Kernel: 00:02:03.629 ==> default: -- Initrd: 00:02:03.629 ==> default: -- Graphics Type: vnc 00:02:03.629 ==> default: -- Graphics Port: -1 00:02:03.629 ==> default: -- Graphics IP: 127.0.0.1 00:02:03.629 ==> default: -- Graphics Password: Not defined 00:02:03.629 ==> default: -- Video Type: cirrus 00:02:03.629 ==> default: -- Video VRAM: 9216 00:02:03.629 ==> default: -- Sound Type: 00:02:03.629 ==> default: -- Keymap: en-us 00:02:03.629 ==> default: -- TPM Path: 00:02:03.629 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:03.629 ==> default: -- Command line args: 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:03.629 ==> default: -> value=-drive, 00:02:03.629 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:03.629 ==> default: -> value=-drive, 00:02:03.629 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.629 ==> default: -> value=-drive, 00:02:03.629 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.629 ==> default: -> value=-drive, 00:02:03.629 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:03.629 ==> default: -> value=-device, 00:02:03.629 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.629 ==> default: Creating shared folders metadata... 00:02:03.629 ==> default: Starting domain. 00:02:05.534 ==> default: Waiting for domain to get an IP address... 00:02:20.411 ==> default: Waiting for SSH to become available... 00:02:21.817 ==> default: Configuring and enabling network interfaces... 
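[Editor's note] The domain settings logged above attach two emulated NVMe controllers to the test VM: nvme-0 (serial 12340) backed by ex2-nvme.img with a single namespace, and nvme-1 (serial 12341) with three namespaces backed by ex2-nvme-multi0/1/2.img. As a rough sketch only, the "-device"/"-drive" fragments listed above would combine into a QEMU invocation along the following lines; the base Fedora 39 disk, CPU, memory, and network options that vagrant-libvirt adds are omitted, so this is not the literal command the hypervisor ran:

  # Sketch assembled from the logged -device/-drive values (illustrative, not the exact libvirt-generated command)
  qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This layout is what the guest later reports in "setup.sh status": nvme0 with nvme0n1, and nvme1 with nvme1n1, nvme1n2, nvme1n3.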
00:02:27.090 default: SSH address: 192.168.121.42:22 00:02:27.090 default: SSH username: vagrant 00:02:27.090 default: SSH auth method: private key 00:02:28.993 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:37.122 ==> default: Mounting SSHFS shared folder... 00:02:38.498 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:38.498 ==> default: Checking Mount.. 00:02:39.876 ==> default: Folder Successfully Mounted! 00:02:39.876 ==> default: Running provisioner: file... 00:02:40.444 default: ~/.gitconfig => .gitconfig 00:02:41.010 00:02:41.010 SUCCESS! 00:02:41.010 00:02:41.010 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:41.010 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:41.010 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:41.010 00:02:41.018 [Pipeline] } 00:02:41.034 [Pipeline] // stage 00:02:41.043 [Pipeline] dir 00:02:41.044 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:41.046 [Pipeline] { 00:02:41.058 [Pipeline] catchError 00:02:41.060 [Pipeline] { 00:02:41.073 [Pipeline] sh 00:02:41.352 + vagrant ssh-config --host vagrant+ 00:02:41.352 sed -ne /^Host/,$p 00:02:41.352 + tee ssh_conf 00:02:45.564 Host vagrant 00:02:45.564 HostName 192.168.121.42 00:02:45.564 User vagrant 00:02:45.564 Port 22 00:02:45.564 UserKnownHostsFile /dev/null 00:02:45.564 StrictHostKeyChecking no 00:02:45.564 PasswordAuthentication no 00:02:45.564 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:45.564 IdentitiesOnly yes 00:02:45.564 LogLevel FATAL 00:02:45.564 ForwardAgent yes 00:02:45.564 ForwardX11 yes 00:02:45.564 00:02:45.578 [Pipeline] withEnv 00:02:45.580 [Pipeline] { 00:02:45.595 [Pipeline] sh 00:02:45.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:45.875 source /etc/os-release 00:02:45.875 [[ -e /image.version ]] && img=$(< /image.version) 00:02:45.875 # Minimal, systemd-like check. 00:02:45.875 if [[ -e /.dockerenv ]]; then 00:02:45.875 # Clear garbage from the node's name: 00:02:45.875 # agt-er_autotest_547-896 -> autotest_547-896 00:02:45.875 # $HOSTNAME is the actual container id 00:02:45.875 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:45.875 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:45.875 # We can assume this is a mount from a host where container is running, 00:02:45.875 # so fetch its hostname to easily identify the target swarm worker. 
00:02:45.875 container="$(< /etc/hostname) ($agent)" 00:02:45.875 else 00:02:45.875 # Fallback 00:02:45.875 container=$agent 00:02:45.875 fi 00:02:45.875 fi 00:02:45.875 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:45.875 00:02:46.145 [Pipeline] } 00:02:46.162 [Pipeline] // withEnv 00:02:46.171 [Pipeline] setCustomBuildProperty 00:02:46.187 [Pipeline] stage 00:02:46.190 [Pipeline] { (Tests) 00:02:46.206 [Pipeline] sh 00:02:46.487 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:46.758 [Pipeline] sh 00:02:47.036 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:47.310 [Pipeline] timeout 00:02:47.311 Timeout set to expire in 1 hr 0 min 00:02:47.313 [Pipeline] { 00:02:47.327 [Pipeline] sh 00:02:47.606 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:48.174 HEAD is now at 1e70ad0e1 util: multi-level fd_group nesting 00:02:48.186 [Pipeline] sh 00:02:48.466 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:48.739 [Pipeline] sh 00:02:49.018 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:49.293 [Pipeline] sh 00:02:49.572 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:49.830 ++ readlink -f spdk_repo 00:02:49.830 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:49.830 + [[ -n /home/vagrant/spdk_repo ]] 00:02:49.830 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:49.830 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:49.830 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:49.830 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:49.830 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:49.830 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:49.830 + cd /home/vagrant/spdk_repo 00:02:49.830 + source /etc/os-release 00:02:49.830 ++ NAME='Fedora Linux' 00:02:49.830 ++ VERSION='39 (Cloud Edition)' 00:02:49.830 ++ ID=fedora 00:02:49.830 ++ VERSION_ID=39 00:02:49.830 ++ VERSION_CODENAME= 00:02:49.830 ++ PLATFORM_ID=platform:f39 00:02:49.830 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:49.830 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:49.830 ++ LOGO=fedora-logo-icon 00:02:49.830 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:49.830 ++ HOME_URL=https://fedoraproject.org/ 00:02:49.830 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:49.830 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:49.830 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:49.830 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:49.830 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:49.830 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:49.830 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:49.830 ++ SUPPORT_END=2024-11-12 00:02:49.830 ++ VARIANT='Cloud Edition' 00:02:49.830 ++ VARIANT_ID=cloud 00:02:49.830 + uname -a 00:02:49.830 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:49.830 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:50.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:50.089 Hugepages 00:02:50.089 node hugesize free / total 00:02:50.089 node0 1048576kB 0 / 0 00:02:50.089 node0 2048kB 0 / 0 00:02:50.089 00:02:50.089 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:50.089 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:50.348 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:50.348 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:50.348 + rm -f /tmp/spdk-ld-path 00:02:50.348 + source autorun-spdk.conf 00:02:50.348 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:50.348 ++ SPDK_TEST_NVMF=1 00:02:50.348 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:50.348 ++ SPDK_TEST_URING=1 00:02:50.348 ++ SPDK_TEST_USDT=1 00:02:50.348 ++ SPDK_RUN_UBSAN=1 00:02:50.348 ++ NET_TYPE=virt 00:02:50.348 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:50.348 ++ RUN_NIGHTLY=0 00:02:50.348 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:50.348 + [[ -n '' ]] 00:02:50.348 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:50.348 + for M in /var/spdk/build-*-manifest.txt 00:02:50.348 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:50.348 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.348 + for M in /var/spdk/build-*-manifest.txt 00:02:50.348 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:50.348 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.348 + for M in /var/spdk/build-*-manifest.txt 00:02:50.348 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:50.348 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:50.348 ++ uname 00:02:50.348 + [[ Linux == \L\i\n\u\x ]] 00:02:50.348 + sudo dmesg -T 00:02:50.348 + sudo dmesg --clear 00:02:50.348 + dmesg_pid=5364 00:02:50.348 + [[ Fedora Linux == FreeBSD ]] 00:02:50.348 + sudo dmesg -Tw 00:02:50.348 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:50.348 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:50.348 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:50.348 + [[ -x /usr/src/fio-static/fio ]] 00:02:50.348 + export FIO_BIN=/usr/src/fio-static/fio 00:02:50.348 + FIO_BIN=/usr/src/fio-static/fio 00:02:50.348 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:50.348 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:50.348 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:50.348 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.348 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.348 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:50.348 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.348 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.348 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.348 14:41:04 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:50.348 14:41:04 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.348 14:41:04 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:50.348 14:41:04 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:50.348 14:41:04 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:50.348 14:41:04 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:50.349 14:41:04 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:50.349 14:41:04 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:50.349 14:41:04 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:50.349 14:41:04 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:50.349 14:41:04 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:50.349 14:41:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:50.349 14:41:04 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:50.608 14:41:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:50.608 14:41:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:50.608 14:41:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:50.608 14:41:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:50.608 14:41:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.608 14:41:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.608 14:41:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.608 14:41:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.608 14:41:05 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.608 14:41:05 -- paths/export.sh@5 -- $ export PATH 00:02:50.608 14:41:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.608 14:41:05 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:50.608 14:41:05 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:50.608 14:41:05 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732286465.XXXXXX 00:02:50.608 14:41:05 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732286465.m1UITz 00:02:50.608 14:41:05 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:50.608 14:41:05 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:50.608 14:41:05 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:50.608 14:41:05 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:50.608 14:41:05 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:50.608 14:41:05 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:50.608 14:41:05 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:50.608 14:41:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.608 14:41:05 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:50.608 14:41:05 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:50.608 14:41:05 -- pm/common@17 -- $ local monitor 00:02:50.608 14:41:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.608 14:41:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.608 14:41:05 -- pm/common@25 -- $ sleep 1 00:02:50.608 14:41:05 -- pm/common@21 -- $ date +%s 00:02:50.608 14:41:05 -- pm/common@21 -- $ date +%s 00:02:50.608 14:41:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732286465 00:02:50.608 14:41:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732286465 00:02:50.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732286465_collect-cpu-load.pm.log 00:02:50.608 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732286465_collect-vmstat.pm.log 00:02:51.554 14:41:06 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:51.554 14:41:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:51.554 14:41:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:51.554 14:41:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:51.554 14:41:06 -- spdk/autobuild.sh@16 -- $ date -u 00:02:51.554 Fri Nov 22 02:41:06 PM UTC 2024 00:02:51.554 14:41:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:51.554 v25.01-pre-221-g1e70ad0e1 00:02:51.554 14:41:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:51.554 14:41:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:51.554 14:41:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:51.554 14:41:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:51.554 14:41:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:51.554 14:41:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.554 ************************************ 00:02:51.554 START TEST ubsan 00:02:51.554 ************************************ 00:02:51.554 using ubsan 00:02:51.554 14:41:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:51.554 00:02:51.554 real 0m0.000s 00:02:51.554 user 0m0.000s 00:02:51.554 sys 0m0.000s 00:02:51.554 14:41:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:51.554 14:41:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.554 ************************************ 00:02:51.554 END TEST ubsan 00:02:51.554 ************************************ 00:02:51.554 14:41:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:51.554 14:41:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.554 14:41:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.554 14:41:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:51.814 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.814 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.073 Using 'verbs' RDMA provider 00:03:07.903 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:20.135 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:20.135 Creating mk/config.mk...done. 00:03:20.135 Creating mk/cc.flags.mk...done. 00:03:20.135 Type 'make' to build. 
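[Editor's note] The configure invocation above records the exact build options used for this run (--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared). A minimal sketch of reproducing this configure/make stage by hand is shown below; the clone URL, submodule/pkgdep steps, and the assumption that fio sources live in /usr/src/fio are not taken from this log and may differ in other environments:

  # Sketch only: mirrors the configure flags and the make -j10 step logged in this run
  git clone https://github.com/spdk/spdk.git && cd spdk
  git submodule update --init
  sudo scripts/pkgdep.sh                      # install build dependencies (assumed helper from the SPDK repo)
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10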
00:03:20.135 14:41:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:20.135 14:41:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:20.135 14:41:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:20.135 14:41:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.135 ************************************ 00:03:20.135 START TEST make 00:03:20.135 ************************************ 00:03:20.135 14:41:33 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:20.135 make[1]: Nothing to be done for 'all'. 00:03:32.337 The Meson build system 00:03:32.337 Version: 1.5.0 00:03:32.337 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:32.337 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:32.337 Build type: native build 00:03:32.337 Program cat found: YES (/usr/bin/cat) 00:03:32.337 Project name: DPDK 00:03:32.337 Project version: 24.03.0 00:03:32.337 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:32.337 C linker for the host machine: cc ld.bfd 2.40-14 00:03:32.337 Host machine cpu family: x86_64 00:03:32.337 Host machine cpu: x86_64 00:03:32.337 Message: ## Building in Developer Mode ## 00:03:32.337 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:32.337 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:32.337 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:32.337 Program python3 found: YES (/usr/bin/python3) 00:03:32.337 Program cat found: YES (/usr/bin/cat) 00:03:32.337 Compiler for C supports arguments -march=native: YES 00:03:32.337 Checking for size of "void *" : 8 00:03:32.337 Checking for size of "void *" : 8 (cached) 00:03:32.337 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:32.337 Library m found: YES 00:03:32.337 Library numa found: YES 00:03:32.337 Has header "numaif.h" : YES 00:03:32.337 Library fdt found: NO 00:03:32.337 Library execinfo found: NO 00:03:32.337 Has header "execinfo.h" : YES 00:03:32.337 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:32.337 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:32.337 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:32.337 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:32.337 Run-time dependency openssl found: YES 3.1.1 00:03:32.337 Run-time dependency libpcap found: YES 1.10.4 00:03:32.337 Has header "pcap.h" with dependency libpcap: YES 00:03:32.337 Compiler for C supports arguments -Wcast-qual: YES 00:03:32.337 Compiler for C supports arguments -Wdeprecated: YES 00:03:32.337 Compiler for C supports arguments -Wformat: YES 00:03:32.337 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:32.337 Compiler for C supports arguments -Wformat-security: NO 00:03:32.337 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:32.337 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:32.337 Compiler for C supports arguments -Wnested-externs: YES 00:03:32.337 Compiler for C supports arguments -Wold-style-definition: YES 00:03:32.337 Compiler for C supports arguments -Wpointer-arith: YES 00:03:32.337 Compiler for C supports arguments -Wsign-compare: YES 00:03:32.337 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:32.337 Compiler for C supports arguments -Wundef: YES 00:03:32.337 Compiler for C supports arguments -Wwrite-strings: YES 00:03:32.337 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:03:32.337 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:32.337 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:32.337 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:32.337 Program objdump found: YES (/usr/bin/objdump) 00:03:32.337 Compiler for C supports arguments -mavx512f: YES 00:03:32.337 Checking if "AVX512 checking" compiles: YES 00:03:32.337 Fetching value of define "__SSE4_2__" : 1 00:03:32.337 Fetching value of define "__AES__" : 1 00:03:32.337 Fetching value of define "__AVX__" : 1 00:03:32.337 Fetching value of define "__AVX2__" : 1 00:03:32.337 Fetching value of define "__AVX512BW__" : (undefined) 00:03:32.337 Fetching value of define "__AVX512CD__" : (undefined) 00:03:32.337 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:32.337 Fetching value of define "__AVX512F__" : (undefined) 00:03:32.337 Fetching value of define "__AVX512VL__" : (undefined) 00:03:32.337 Fetching value of define "__PCLMUL__" : 1 00:03:32.337 Fetching value of define "__RDRND__" : 1 00:03:32.337 Fetching value of define "__RDSEED__" : 1 00:03:32.337 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:32.337 Fetching value of define "__znver1__" : (undefined) 00:03:32.337 Fetching value of define "__znver2__" : (undefined) 00:03:32.337 Fetching value of define "__znver3__" : (undefined) 00:03:32.337 Fetching value of define "__znver4__" : (undefined) 00:03:32.337 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:32.337 Message: lib/log: Defining dependency "log" 00:03:32.337 Message: lib/kvargs: Defining dependency "kvargs" 00:03:32.337 Message: lib/telemetry: Defining dependency "telemetry" 00:03:32.337 Checking for function "getentropy" : NO 00:03:32.337 Message: lib/eal: Defining dependency "eal" 00:03:32.337 Message: lib/ring: Defining dependency "ring" 00:03:32.337 Message: lib/rcu: Defining dependency "rcu" 00:03:32.337 Message: lib/mempool: Defining dependency "mempool" 00:03:32.337 Message: lib/mbuf: Defining dependency "mbuf" 00:03:32.337 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:32.337 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:32.337 Compiler for C supports arguments -mpclmul: YES 00:03:32.338 Compiler for C supports arguments -maes: YES 00:03:32.338 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:32.338 Compiler for C supports arguments -mavx512bw: YES 00:03:32.338 Compiler for C supports arguments -mavx512dq: YES 00:03:32.338 Compiler for C supports arguments -mavx512vl: YES 00:03:32.338 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:32.338 Compiler for C supports arguments -mavx2: YES 00:03:32.338 Compiler for C supports arguments -mavx: YES 00:03:32.338 Message: lib/net: Defining dependency "net" 00:03:32.338 Message: lib/meter: Defining dependency "meter" 00:03:32.338 Message: lib/ethdev: Defining dependency "ethdev" 00:03:32.338 Message: lib/pci: Defining dependency "pci" 00:03:32.338 Message: lib/cmdline: Defining dependency "cmdline" 00:03:32.338 Message: lib/hash: Defining dependency "hash" 00:03:32.338 Message: lib/timer: Defining dependency "timer" 00:03:32.338 Message: lib/compressdev: Defining dependency "compressdev" 00:03:32.338 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:32.338 Message: lib/dmadev: Defining dependency "dmadev" 00:03:32.338 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:32.338 Message: lib/power: Defining 
dependency "power" 00:03:32.338 Message: lib/reorder: Defining dependency "reorder" 00:03:32.338 Message: lib/security: Defining dependency "security" 00:03:32.338 Has header "linux/userfaultfd.h" : YES 00:03:32.338 Has header "linux/vduse.h" : YES 00:03:32.338 Message: lib/vhost: Defining dependency "vhost" 00:03:32.338 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:32.338 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:32.338 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:32.338 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:32.338 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:32.338 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:32.338 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:32.338 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:32.338 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:32.338 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:32.338 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:32.338 Configuring doxy-api-html.conf using configuration 00:03:32.338 Configuring doxy-api-man.conf using configuration 00:03:32.338 Program mandb found: YES (/usr/bin/mandb) 00:03:32.338 Program sphinx-build found: NO 00:03:32.338 Configuring rte_build_config.h using configuration 00:03:32.338 Message: 00:03:32.338 ================= 00:03:32.338 Applications Enabled 00:03:32.338 ================= 00:03:32.338 00:03:32.338 apps: 00:03:32.338 00:03:32.338 00:03:32.338 Message: 00:03:32.338 ================= 00:03:32.338 Libraries Enabled 00:03:32.338 ================= 00:03:32.338 00:03:32.338 libs: 00:03:32.338 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:32.338 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:32.338 cryptodev, dmadev, power, reorder, security, vhost, 00:03:32.338 00:03:32.338 Message: 00:03:32.338 =============== 00:03:32.338 Drivers Enabled 00:03:32.338 =============== 00:03:32.338 00:03:32.338 common: 00:03:32.338 00:03:32.338 bus: 00:03:32.338 pci, vdev, 00:03:32.338 mempool: 00:03:32.338 ring, 00:03:32.338 dma: 00:03:32.338 00:03:32.338 net: 00:03:32.338 00:03:32.338 crypto: 00:03:32.338 00:03:32.338 compress: 00:03:32.338 00:03:32.338 vdpa: 00:03:32.338 00:03:32.338 00:03:32.338 Message: 00:03:32.338 ================= 00:03:32.338 Content Skipped 00:03:32.338 ================= 00:03:32.338 00:03:32.338 apps: 00:03:32.338 dumpcap: explicitly disabled via build config 00:03:32.338 graph: explicitly disabled via build config 00:03:32.338 pdump: explicitly disabled via build config 00:03:32.338 proc-info: explicitly disabled via build config 00:03:32.338 test-acl: explicitly disabled via build config 00:03:32.338 test-bbdev: explicitly disabled via build config 00:03:32.338 test-cmdline: explicitly disabled via build config 00:03:32.338 test-compress-perf: explicitly disabled via build config 00:03:32.338 test-crypto-perf: explicitly disabled via build config 00:03:32.338 test-dma-perf: explicitly disabled via build config 00:03:32.338 test-eventdev: explicitly disabled via build config 00:03:32.338 test-fib: explicitly disabled via build config 00:03:32.338 test-flow-perf: explicitly disabled via build config 00:03:32.338 test-gpudev: explicitly disabled via build config 00:03:32.338 test-mldev: explicitly disabled via build config 00:03:32.338 test-pipeline: 
explicitly disabled via build config 00:03:32.338 test-pmd: explicitly disabled via build config 00:03:32.338 test-regex: explicitly disabled via build config 00:03:32.338 test-sad: explicitly disabled via build config 00:03:32.338 test-security-perf: explicitly disabled via build config 00:03:32.338 00:03:32.338 libs: 00:03:32.338 argparse: explicitly disabled via build config 00:03:32.338 metrics: explicitly disabled via build config 00:03:32.338 acl: explicitly disabled via build config 00:03:32.338 bbdev: explicitly disabled via build config 00:03:32.338 bitratestats: explicitly disabled via build config 00:03:32.338 bpf: explicitly disabled via build config 00:03:32.338 cfgfile: explicitly disabled via build config 00:03:32.338 distributor: explicitly disabled via build config 00:03:32.338 efd: explicitly disabled via build config 00:03:32.338 eventdev: explicitly disabled via build config 00:03:32.338 dispatcher: explicitly disabled via build config 00:03:32.338 gpudev: explicitly disabled via build config 00:03:32.338 gro: explicitly disabled via build config 00:03:32.338 gso: explicitly disabled via build config 00:03:32.338 ip_frag: explicitly disabled via build config 00:03:32.338 jobstats: explicitly disabled via build config 00:03:32.338 latencystats: explicitly disabled via build config 00:03:32.338 lpm: explicitly disabled via build config 00:03:32.338 member: explicitly disabled via build config 00:03:32.338 pcapng: explicitly disabled via build config 00:03:32.338 rawdev: explicitly disabled via build config 00:03:32.338 regexdev: explicitly disabled via build config 00:03:32.338 mldev: explicitly disabled via build config 00:03:32.338 rib: explicitly disabled via build config 00:03:32.338 sched: explicitly disabled via build config 00:03:32.338 stack: explicitly disabled via build config 00:03:32.338 ipsec: explicitly disabled via build config 00:03:32.338 pdcp: explicitly disabled via build config 00:03:32.338 fib: explicitly disabled via build config 00:03:32.338 port: explicitly disabled via build config 00:03:32.338 pdump: explicitly disabled via build config 00:03:32.338 table: explicitly disabled via build config 00:03:32.338 pipeline: explicitly disabled via build config 00:03:32.338 graph: explicitly disabled via build config 00:03:32.338 node: explicitly disabled via build config 00:03:32.338 00:03:32.338 drivers: 00:03:32.338 common/cpt: not in enabled drivers build config 00:03:32.338 common/dpaax: not in enabled drivers build config 00:03:32.338 common/iavf: not in enabled drivers build config 00:03:32.338 common/idpf: not in enabled drivers build config 00:03:32.338 common/ionic: not in enabled drivers build config 00:03:32.338 common/mvep: not in enabled drivers build config 00:03:32.338 common/octeontx: not in enabled drivers build config 00:03:32.338 bus/auxiliary: not in enabled drivers build config 00:03:32.338 bus/cdx: not in enabled drivers build config 00:03:32.338 bus/dpaa: not in enabled drivers build config 00:03:32.338 bus/fslmc: not in enabled drivers build config 00:03:32.338 bus/ifpga: not in enabled drivers build config 00:03:32.338 bus/platform: not in enabled drivers build config 00:03:32.338 bus/uacce: not in enabled drivers build config 00:03:32.338 bus/vmbus: not in enabled drivers build config 00:03:32.338 common/cnxk: not in enabled drivers build config 00:03:32.338 common/mlx5: not in enabled drivers build config 00:03:32.338 common/nfp: not in enabled drivers build config 00:03:32.338 common/nitrox: not in enabled drivers build config 
00:03:32.338 common/qat: not in enabled drivers build config 00:03:32.338 common/sfc_efx: not in enabled drivers build config 00:03:32.338 mempool/bucket: not in enabled drivers build config 00:03:32.338 mempool/cnxk: not in enabled drivers build config 00:03:32.338 mempool/dpaa: not in enabled drivers build config 00:03:32.338 mempool/dpaa2: not in enabled drivers build config 00:03:32.338 mempool/octeontx: not in enabled drivers build config 00:03:32.338 mempool/stack: not in enabled drivers build config 00:03:32.338 dma/cnxk: not in enabled drivers build config 00:03:32.338 dma/dpaa: not in enabled drivers build config 00:03:32.338 dma/dpaa2: not in enabled drivers build config 00:03:32.338 dma/hisilicon: not in enabled drivers build config 00:03:32.338 dma/idxd: not in enabled drivers build config 00:03:32.338 dma/ioat: not in enabled drivers build config 00:03:32.338 dma/skeleton: not in enabled drivers build config 00:03:32.338 net/af_packet: not in enabled drivers build config 00:03:32.338 net/af_xdp: not in enabled drivers build config 00:03:32.338 net/ark: not in enabled drivers build config 00:03:32.338 net/atlantic: not in enabled drivers build config 00:03:32.338 net/avp: not in enabled drivers build config 00:03:32.338 net/axgbe: not in enabled drivers build config 00:03:32.338 net/bnx2x: not in enabled drivers build config 00:03:32.338 net/bnxt: not in enabled drivers build config 00:03:32.338 net/bonding: not in enabled drivers build config 00:03:32.338 net/cnxk: not in enabled drivers build config 00:03:32.338 net/cpfl: not in enabled drivers build config 00:03:32.338 net/cxgbe: not in enabled drivers build config 00:03:32.338 net/dpaa: not in enabled drivers build config 00:03:32.338 net/dpaa2: not in enabled drivers build config 00:03:32.339 net/e1000: not in enabled drivers build config 00:03:32.339 net/ena: not in enabled drivers build config 00:03:32.339 net/enetc: not in enabled drivers build config 00:03:32.339 net/enetfec: not in enabled drivers build config 00:03:32.339 net/enic: not in enabled drivers build config 00:03:32.339 net/failsafe: not in enabled drivers build config 00:03:32.339 net/fm10k: not in enabled drivers build config 00:03:32.339 net/gve: not in enabled drivers build config 00:03:32.339 net/hinic: not in enabled drivers build config 00:03:32.339 net/hns3: not in enabled drivers build config 00:03:32.339 net/i40e: not in enabled drivers build config 00:03:32.339 net/iavf: not in enabled drivers build config 00:03:32.339 net/ice: not in enabled drivers build config 00:03:32.339 net/idpf: not in enabled drivers build config 00:03:32.339 net/igc: not in enabled drivers build config 00:03:32.339 net/ionic: not in enabled drivers build config 00:03:32.339 net/ipn3ke: not in enabled drivers build config 00:03:32.339 net/ixgbe: not in enabled drivers build config 00:03:32.339 net/mana: not in enabled drivers build config 00:03:32.339 net/memif: not in enabled drivers build config 00:03:32.339 net/mlx4: not in enabled drivers build config 00:03:32.339 net/mlx5: not in enabled drivers build config 00:03:32.339 net/mvneta: not in enabled drivers build config 00:03:32.339 net/mvpp2: not in enabled drivers build config 00:03:32.339 net/netvsc: not in enabled drivers build config 00:03:32.339 net/nfb: not in enabled drivers build config 00:03:32.339 net/nfp: not in enabled drivers build config 00:03:32.339 net/ngbe: not in enabled drivers build config 00:03:32.339 net/null: not in enabled drivers build config 00:03:32.339 net/octeontx: not in enabled drivers 
build config 00:03:32.339 net/octeon_ep: not in enabled drivers build config 00:03:32.339 net/pcap: not in enabled drivers build config 00:03:32.339 net/pfe: not in enabled drivers build config 00:03:32.339 net/qede: not in enabled drivers build config 00:03:32.339 net/ring: not in enabled drivers build config 00:03:32.339 net/sfc: not in enabled drivers build config 00:03:32.339 net/softnic: not in enabled drivers build config 00:03:32.339 net/tap: not in enabled drivers build config 00:03:32.339 net/thunderx: not in enabled drivers build config 00:03:32.339 net/txgbe: not in enabled drivers build config 00:03:32.339 net/vdev_netvsc: not in enabled drivers build config 00:03:32.339 net/vhost: not in enabled drivers build config 00:03:32.339 net/virtio: not in enabled drivers build config 00:03:32.339 net/vmxnet3: not in enabled drivers build config 00:03:32.339 raw/*: missing internal dependency, "rawdev" 00:03:32.339 crypto/armv8: not in enabled drivers build config 00:03:32.339 crypto/bcmfs: not in enabled drivers build config 00:03:32.339 crypto/caam_jr: not in enabled drivers build config 00:03:32.339 crypto/ccp: not in enabled drivers build config 00:03:32.339 crypto/cnxk: not in enabled drivers build config 00:03:32.339 crypto/dpaa_sec: not in enabled drivers build config 00:03:32.339 crypto/dpaa2_sec: not in enabled drivers build config 00:03:32.339 crypto/ipsec_mb: not in enabled drivers build config 00:03:32.339 crypto/mlx5: not in enabled drivers build config 00:03:32.339 crypto/mvsam: not in enabled drivers build config 00:03:32.339 crypto/nitrox: not in enabled drivers build config 00:03:32.339 crypto/null: not in enabled drivers build config 00:03:32.339 crypto/octeontx: not in enabled drivers build config 00:03:32.339 crypto/openssl: not in enabled drivers build config 00:03:32.339 crypto/scheduler: not in enabled drivers build config 00:03:32.339 crypto/uadk: not in enabled drivers build config 00:03:32.339 crypto/virtio: not in enabled drivers build config 00:03:32.339 compress/isal: not in enabled drivers build config 00:03:32.339 compress/mlx5: not in enabled drivers build config 00:03:32.339 compress/nitrox: not in enabled drivers build config 00:03:32.339 compress/octeontx: not in enabled drivers build config 00:03:32.339 compress/zlib: not in enabled drivers build config 00:03:32.339 regex/*: missing internal dependency, "regexdev" 00:03:32.339 ml/*: missing internal dependency, "mldev" 00:03:32.339 vdpa/ifc: not in enabled drivers build config 00:03:32.339 vdpa/mlx5: not in enabled drivers build config 00:03:32.339 vdpa/nfp: not in enabled drivers build config 00:03:32.339 vdpa/sfc: not in enabled drivers build config 00:03:32.339 event/*: missing internal dependency, "eventdev" 00:03:32.339 baseband/*: missing internal dependency, "bbdev" 00:03:32.339 gpu/*: missing internal dependency, "gpudev" 00:03:32.339 00:03:32.339 00:03:32.339 Build targets in project: 85 00:03:32.339 00:03:32.339 DPDK 24.03.0 00:03:32.339 00:03:32.339 User defined options 00:03:32.339 buildtype : debug 00:03:32.339 default_library : shared 00:03:32.339 libdir : lib 00:03:32.339 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:32.339 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:32.339 c_link_args : 00:03:32.339 cpu_instruction_set: native 00:03:32.339 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:32.339 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:32.339 enable_docs : false 00:03:32.339 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:32.339 enable_kmods : false 00:03:32.339 max_lcores : 128 00:03:32.339 tests : false 00:03:32.339 00:03:32.339 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:32.339 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:32.598 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:32.598 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:32.598 [3/268] Linking static target lib/librte_kvargs.a 00:03:32.598 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:32.598 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:32.598 [6/268] Linking static target lib/librte_log.a 00:03:33.219 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.219 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:33.219 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:33.219 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:33.219 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:33.478 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:33.478 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:33.478 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:33.478 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:33.736 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:33.736 [17/268] Linking static target lib/librte_telemetry.a 00:03:33.736 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:33.736 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.736 [20/268] Linking target lib/librte_log.so.24.1 00:03:33.994 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:34.253 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:34.253 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:34.253 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:34.253 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:34.511 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:34.511 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:34.511 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:34.511 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:34.511 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:34.511 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:34.511 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:34.511 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.769 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:35.026 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:35.026 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:35.026 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:35.285 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:35.285 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:35.285 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:35.546 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:35.546 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:35.546 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:35.546 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:35.546 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:35.546 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:35.546 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:35.807 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:36.066 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:36.066 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:36.066 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:36.324 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:36.324 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:36.583 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:36.583 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:36.583 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:36.583 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:36.583 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:36.841 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:36.841 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:36.841 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:37.099 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:37.099 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:37.358 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:37.358 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:37.358 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:37.358 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:37.617 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:37.617 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:37.617 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:37.617 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:37.875 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:37.875 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:37.875 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:37.875 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:38.134 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:38.134 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:38.134 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:38.134 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:38.393 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:38.393 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:38.393 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:38.653 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:38.653 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:38.653 [85/268] Linking static target lib/librte_eal.a 00:03:38.911 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:38.911 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:38.911 [88/268] Linking static target lib/librte_ring.a 00:03:38.912 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:38.912 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:38.912 [91/268] Linking static target lib/librte_rcu.a 00:03:39.288 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:39.288 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:39.288 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:39.288 [95/268] Linking static target lib/librte_mempool.a 00:03:39.288 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:39.288 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:39.596 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.596 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:39.596 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.854 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:39.854 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:39.854 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:39.854 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:39.854 [105/268] Linking static target lib/librte_mbuf.a 00:03:40.112 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:40.112 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:40.112 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:40.112 [109/268] Linking static target lib/librte_net.a 00:03:40.370 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:40.370 [111/268] Linking static target lib/librte_meter.a 00:03:40.370 [112/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.629 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:40.629 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.629 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:40.629 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:40.629 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.629 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:41.195 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.452 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:41.452 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:41.452 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:41.452 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:42.019 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:42.019 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:42.019 [126/268] Linking static target lib/librte_pci.a 00:03:42.019 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:42.019 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:42.019 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:42.278 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:42.278 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:42.278 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:42.278 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:42.278 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:42.278 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:42.278 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:42.278 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:42.278 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:42.537 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.537 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:42.537 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:42.537 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:42.537 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:42.537 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:42.537 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:42.795 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:42.795 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:42.795 [148/268] Linking static target lib/librte_ethdev.a 00:03:42.795 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:42.795 [150/268] Linking static target lib/librte_cmdline.a 00:03:43.053 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:43.311 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:43.311 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:43.311 [154/268] Linking static target lib/librte_timer.a 00:03:43.311 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:43.311 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:43.569 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:43.569 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:43.569 [159/268] Linking static target lib/librte_hash.a 00:03:43.827 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:43.827 [161/268] Linking static target lib/librte_compressdev.a 00:03:44.084 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:44.085 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.085 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:44.085 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:44.342 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:44.342 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:44.600 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:44.600 [169/268] Linking static target lib/librte_dmadev.a 00:03:44.600 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:44.600 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:44.600 [172/268] Linking static target lib/librte_cryptodev.a 00:03:44.600 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.858 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.858 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:44.858 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:44.858 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.858 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:45.425 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:45.425 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:45.425 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:45.425 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:45.425 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.425 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:45.684 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:45.684 [186/268] Linking static target lib/librte_power.a 00:03:45.943 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:45.943 [188/268] Linking static target lib/librte_reorder.a 00:03:46.201 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:46.201 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:46.201 [191/268] Linking static 
target lib/librte_security.a 00:03:46.201 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:46.201 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:46.460 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.719 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:46.976 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.976 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.234 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:47.234 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:47.234 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:47.234 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.492 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:47.492 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:47.750 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:47.750 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:48.009 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:48.009 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:48.009 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:48.009 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:48.009 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:48.268 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:48.268 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:48.268 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:48.268 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:48.268 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:48.268 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:48.268 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:48.268 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:48.526 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:48.526 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:48.526 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:48.526 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:48.526 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.783 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:48.783 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:48.783 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:48.783 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:49.040 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:49.607 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:49.607 [230/268] Linking static target lib/librte_vhost.a 00:03:50.541 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.541 [232/268] Linking target lib/librte_eal.so.24.1 00:03:50.541 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:50.541 [234/268] Linking target lib/librte_meter.so.24.1 00:03:50.541 [235/268] Linking target lib/librte_timer.so.24.1 00:03:50.541 [236/268] Linking target lib/librte_ring.so.24.1 00:03:50.541 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:50.541 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:50.541 [239/268] Linking target lib/librte_pci.so.24.1 00:03:50.808 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:50.808 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:50.808 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:50.808 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:50.808 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:50.808 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:50.808 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:50.808 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:51.105 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:51.105 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:51.105 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:51.105 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:51.105 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.105 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.105 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:51.105 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:51.105 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:51.105 [257/268] Linking target lib/librte_net.so.24.1 00:03:51.105 [258/268] Linking target lib/librte_reorder.so.24.1 00:03:51.366 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:51.366 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:51.366 [261/268] Linking target lib/librte_hash.so.24.1 00:03:51.366 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:51.366 [263/268] Linking target lib/librte_security.so.24.1 00:03:51.366 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:51.625 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:51.625 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:51.625 [267/268] Linking target lib/librte_power.so.24.1 00:03:51.625 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:51.625 INFO: autodetecting backend as ninja 00:03:51.625 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:18.166 CC lib/ut_mock/mock.o 00:04:18.166 CC lib/log/log_flags.o 00:04:18.166 CC lib/log/log.o 00:04:18.166 CC 
lib/log/log_deprecated.o 00:04:18.166 CC lib/ut/ut.o 00:04:18.166 LIB libspdk_log.a 00:04:18.166 LIB libspdk_ut_mock.a 00:04:18.166 LIB libspdk_ut.a 00:04:18.166 SO libspdk_ut_mock.so.6.0 00:04:18.166 SO libspdk_ut.so.2.0 00:04:18.166 SO libspdk_log.so.7.1 00:04:18.166 SYMLINK libspdk_log.so 00:04:18.166 SYMLINK libspdk_ut_mock.so 00:04:18.166 SYMLINK libspdk_ut.so 00:04:18.166 CC lib/dma/dma.o 00:04:18.166 CC lib/ioat/ioat.o 00:04:18.166 CC lib/util/bit_array.o 00:04:18.166 CC lib/util/cpuset.o 00:04:18.166 CXX lib/trace_parser/trace.o 00:04:18.166 CC lib/util/base64.o 00:04:18.166 CC lib/util/crc16.o 00:04:18.166 CC lib/util/crc32c.o 00:04:18.166 CC lib/util/crc32.o 00:04:18.166 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.166 CC lib/vfio_user/host/vfio_user.o 00:04:18.166 CC lib/util/crc32_ieee.o 00:04:18.166 CC lib/util/crc64.o 00:04:18.166 LIB libspdk_dma.a 00:04:18.166 CC lib/util/dif.o 00:04:18.166 CC lib/util/fd.o 00:04:18.166 SO libspdk_dma.so.5.0 00:04:18.166 CC lib/util/fd_group.o 00:04:18.166 SYMLINK libspdk_dma.so 00:04:18.166 CC lib/util/file.o 00:04:18.166 LIB libspdk_ioat.a 00:04:18.166 CC lib/util/hexlify.o 00:04:18.166 CC lib/util/iov.o 00:04:18.166 SO libspdk_ioat.so.7.0 00:04:18.166 CC lib/util/math.o 00:04:18.166 CC lib/util/net.o 00:04:18.166 LIB libspdk_vfio_user.a 00:04:18.166 SYMLINK libspdk_ioat.so 00:04:18.166 CC lib/util/pipe.o 00:04:18.166 SO libspdk_vfio_user.so.5.0 00:04:18.166 CC lib/util/strerror_tls.o 00:04:18.166 CC lib/util/string.o 00:04:18.166 SYMLINK libspdk_vfio_user.so 00:04:18.166 CC lib/util/uuid.o 00:04:18.166 CC lib/util/xor.o 00:04:18.166 CC lib/util/zipf.o 00:04:18.166 CC lib/util/md5.o 00:04:18.166 LIB libspdk_util.a 00:04:18.166 SO libspdk_util.so.10.1 00:04:18.166 LIB libspdk_trace_parser.a 00:04:18.166 SYMLINK libspdk_util.so 00:04:18.166 SO libspdk_trace_parser.so.6.0 00:04:18.166 SYMLINK libspdk_trace_parser.so 00:04:18.166 CC lib/conf/conf.o 00:04:18.166 CC lib/rdma_utils/rdma_utils.o 00:04:18.166 CC lib/json/json_parse.o 00:04:18.166 CC lib/json/json_util.o 00:04:18.166 CC lib/idxd/idxd.o 00:04:18.166 CC lib/vmd/vmd.o 00:04:18.166 CC lib/idxd/idxd_user.o 00:04:18.166 CC lib/json/json_write.o 00:04:18.166 CC lib/idxd/idxd_kernel.o 00:04:18.166 CC lib/env_dpdk/env.o 00:04:18.426 CC lib/env_dpdk/memory.o 00:04:18.426 CC lib/vmd/led.o 00:04:18.426 LIB libspdk_conf.a 00:04:18.426 CC lib/env_dpdk/pci.o 00:04:18.426 SO libspdk_conf.so.6.0 00:04:18.426 LIB libspdk_rdma_utils.a 00:04:18.426 CC lib/env_dpdk/init.o 00:04:18.426 SO libspdk_rdma_utils.so.1.0 00:04:18.426 LIB libspdk_json.a 00:04:18.426 SYMLINK libspdk_conf.so 00:04:18.426 CC lib/env_dpdk/threads.o 00:04:18.426 SYMLINK libspdk_rdma_utils.so 00:04:18.426 SO libspdk_json.so.6.0 00:04:18.426 CC lib/env_dpdk/pci_ioat.o 00:04:18.685 CC lib/env_dpdk/pci_virtio.o 00:04:18.685 SYMLINK libspdk_json.so 00:04:18.685 CC lib/env_dpdk/pci_vmd.o 00:04:18.685 CC lib/env_dpdk/pci_idxd.o 00:04:18.685 CC lib/env_dpdk/pci_event.o 00:04:18.685 CC lib/rdma_provider/common.o 00:04:18.685 LIB libspdk_idxd.a 00:04:18.943 LIB libspdk_vmd.a 00:04:18.943 SO libspdk_idxd.so.12.1 00:04:18.943 CC lib/env_dpdk/sigbus_handler.o 00:04:18.943 CC lib/env_dpdk/pci_dpdk.o 00:04:18.943 SO libspdk_vmd.so.6.0 00:04:18.943 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:18.943 SYMLINK libspdk_idxd.so 00:04:18.943 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:18.943 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:18.943 SYMLINK libspdk_vmd.so 00:04:18.943 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:18.943 CC lib/jsonrpc/jsonrpc_server.o 
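(Annotation, not part of the job output: the LIB / SO / SYMLINK lines above are SPDK's Makefile link step, which archives each component, stamps a versioned shared object such as libspdk_log.so.7.1 or libspdk_util.so.10.1, and creates the unversioned symlink. A hedged, after-the-fact sanity check of those shared objects might look like the sketch below; the build/lib output path is assumed from a standard checkout, not read from this log.)

```bash
# Illustrative only: inspect the shared libraries whose SO/SYMLINK steps
# appear in the log above. The build/lib location is an assumption.
SPDK_LIB=/home/vagrant/spdk_repo/spdk/build/lib
for so in "$SPDK_LIB"/libspdk_log.so "$SPDK_LIB"/libspdk_util.so; do
    [ -e "$so" ] || { echo "missing: $so"; continue; }
    readelf -d "$so" | grep -E 'SONAME|NEEDED'     # version tag and dependencies
    ldd "$so" | grep 'not found' && echo "unresolved dependency in $so"
done
```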
00:04:18.943 CC lib/jsonrpc/jsonrpc_client.o 00:04:18.943 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.202 LIB libspdk_rdma_provider.a 00:04:19.202 SO libspdk_rdma_provider.so.7.0 00:04:19.202 SYMLINK libspdk_rdma_provider.so 00:04:19.202 LIB libspdk_jsonrpc.a 00:04:19.461 SO libspdk_jsonrpc.so.6.0 00:04:19.461 SYMLINK libspdk_jsonrpc.so 00:04:19.720 LIB libspdk_env_dpdk.a 00:04:19.720 SO libspdk_env_dpdk.so.15.1 00:04:19.720 CC lib/rpc/rpc.o 00:04:19.979 SYMLINK libspdk_env_dpdk.so 00:04:19.979 LIB libspdk_rpc.a 00:04:19.979 SO libspdk_rpc.so.6.0 00:04:19.979 SYMLINK libspdk_rpc.so 00:04:20.237 CC lib/keyring/keyring.o 00:04:20.237 CC lib/keyring/keyring_rpc.o 00:04:20.237 CC lib/trace/trace.o 00:04:20.237 CC lib/trace/trace_flags.o 00:04:20.237 CC lib/notify/notify.o 00:04:20.237 CC lib/trace/trace_rpc.o 00:04:20.237 CC lib/notify/notify_rpc.o 00:04:20.496 LIB libspdk_notify.a 00:04:20.496 SO libspdk_notify.so.6.0 00:04:20.496 LIB libspdk_keyring.a 00:04:20.496 SO libspdk_keyring.so.2.0 00:04:20.496 SYMLINK libspdk_notify.so 00:04:20.755 LIB libspdk_trace.a 00:04:20.755 SYMLINK libspdk_keyring.so 00:04:20.755 SO libspdk_trace.so.11.0 00:04:20.755 SYMLINK libspdk_trace.so 00:04:21.013 CC lib/thread/thread.o 00:04:21.013 CC lib/sock/sock.o 00:04:21.013 CC lib/thread/iobuf.o 00:04:21.013 CC lib/sock/sock_rpc.o 00:04:21.581 LIB libspdk_sock.a 00:04:21.581 SO libspdk_sock.so.10.0 00:04:21.581 SYMLINK libspdk_sock.so 00:04:21.839 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.839 CC lib/nvme/nvme_ns_cmd.o 00:04:21.839 CC lib/nvme/nvme_ctrlr.o 00:04:21.839 CC lib/nvme/nvme_fabric.o 00:04:21.839 CC lib/nvme/nvme_ns.o 00:04:21.839 CC lib/nvme/nvme_pcie_common.o 00:04:21.839 CC lib/nvme/nvme_qpair.o 00:04:21.839 CC lib/nvme/nvme_pcie.o 00:04:21.839 CC lib/nvme/nvme.o 00:04:22.773 LIB libspdk_thread.a 00:04:22.773 SO libspdk_thread.so.11.0 00:04:22.773 SYMLINK libspdk_thread.so 00:04:22.773 CC lib/nvme/nvme_quirks.o 00:04:22.773 CC lib/nvme/nvme_transport.o 00:04:22.773 CC lib/nvme/nvme_discovery.o 00:04:22.773 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.773 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.031 CC lib/nvme/nvme_tcp.o 00:04:23.031 CC lib/nvme/nvme_opal.o 00:04:23.031 CC lib/nvme/nvme_io_msg.o 00:04:23.290 CC lib/nvme/nvme_poll_group.o 00:04:23.290 CC lib/nvme/nvme_zns.o 00:04:23.549 CC lib/nvme/nvme_stubs.o 00:04:23.549 CC lib/nvme/nvme_auth.o 00:04:23.549 CC lib/nvme/nvme_cuse.o 00:04:23.549 CC lib/nvme/nvme_rdma.o 00:04:23.844 CC lib/accel/accel.o 00:04:23.844 CC lib/blob/blobstore.o 00:04:23.844 CC lib/blob/request.o 00:04:24.108 CC lib/blob/zeroes.o 00:04:24.108 CC lib/blob/blob_bs_dev.o 00:04:24.108 CC lib/accel/accel_rpc.o 00:04:24.108 CC lib/accel/accel_sw.o 00:04:24.673 CC lib/virtio/virtio.o 00:04:24.673 CC lib/virtio/virtio_vhost_user.o 00:04:24.673 CC lib/virtio/virtio_vfio_user.o 00:04:24.673 CC lib/virtio/virtio_pci.o 00:04:24.673 CC lib/init/json_config.o 00:04:24.674 CC lib/init/subsystem.o 00:04:24.674 CC lib/fsdev/fsdev.o 00:04:24.674 CC lib/init/subsystem_rpc.o 00:04:24.674 CC lib/init/rpc.o 00:04:24.931 CC lib/fsdev/fsdev_io.o 00:04:24.931 CC lib/fsdev/fsdev_rpc.o 00:04:24.931 LIB libspdk_virtio.a 00:04:24.931 LIB libspdk_accel.a 00:04:24.931 SO libspdk_virtio.so.7.0 00:04:24.931 SO libspdk_accel.so.16.0 00:04:24.931 LIB libspdk_init.a 00:04:24.931 SYMLINK libspdk_virtio.so 00:04:24.931 LIB libspdk_nvme.a 00:04:24.931 SO libspdk_init.so.6.0 00:04:24.931 SYMLINK libspdk_accel.so 00:04:25.189 SYMLINK libspdk_init.so 00:04:25.189 SO libspdk_nvme.so.15.0 00:04:25.189 CC lib/bdev/bdev.o 
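(Annotation, not part of the job output: the jsonrpc_server / jsonrpc_client objects compiled just above back SPDK's management socket. A minimal smoke test of that interface, assuming the standard spdk_tgt binary, scripts/rpc.py helper, and default /var/tmp/spdk.sock path from a normal SPDK checkout rather than anything shown in this log, could be:)

```bash
# Illustrative only: exercise the JSON-RPC layer built above.
cd /home/vagrant/spdk_repo/spdk              # assumed repo location
./build/bin/spdk_tgt &                       # listens on /var/tmp/spdk.sock by default
sleep 3
./scripts/rpc.py rpc_get_methods | head      # enumerate the registered RPC methods
./scripts/rpc.py spdk_kill_instance SIGTERM  # shut the target back down
wait
```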
00:04:25.189 CC lib/bdev/bdev_rpc.o 00:04:25.189 CC lib/bdev/bdev_zone.o 00:04:25.189 CC lib/bdev/part.o 00:04:25.189 CC lib/bdev/scsi_nvme.o 00:04:25.189 CC lib/event/app.o 00:04:25.189 CC lib/event/reactor.o 00:04:25.189 LIB libspdk_fsdev.a 00:04:25.448 SO libspdk_fsdev.so.2.0 00:04:25.448 SYMLINK libspdk_fsdev.so 00:04:25.448 CC lib/event/log_rpc.o 00:04:25.448 CC lib/event/app_rpc.o 00:04:25.448 SYMLINK libspdk_nvme.so 00:04:25.448 CC lib/event/scheduler_static.o 00:04:25.706 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:25.706 LIB libspdk_event.a 00:04:25.706 SO libspdk_event.so.14.0 00:04:25.964 SYMLINK libspdk_event.so 00:04:26.223 LIB libspdk_fuse_dispatcher.a 00:04:26.223 SO libspdk_fuse_dispatcher.so.1.0 00:04:26.482 SYMLINK libspdk_fuse_dispatcher.so 00:04:27.049 LIB libspdk_blob.a 00:04:27.049 SO libspdk_blob.so.11.0 00:04:27.049 SYMLINK libspdk_blob.so 00:04:27.307 CC lib/lvol/lvol.o 00:04:27.307 CC lib/blobfs/blobfs.o 00:04:27.307 CC lib/blobfs/tree.o 00:04:28.244 LIB libspdk_bdev.a 00:04:28.244 SO libspdk_bdev.so.17.0 00:04:28.244 SYMLINK libspdk_bdev.so 00:04:28.244 LIB libspdk_blobfs.a 00:04:28.244 SO libspdk_blobfs.so.10.0 00:04:28.502 SYMLINK libspdk_blobfs.so 00:04:28.502 CC lib/scsi/lun.o 00:04:28.502 CC lib/scsi/dev.o 00:04:28.502 CC lib/scsi/scsi.o 00:04:28.502 CC lib/scsi/port.o 00:04:28.502 CC lib/scsi/scsi_bdev.o 00:04:28.502 CC lib/ftl/ftl_core.o 00:04:28.502 CC lib/ublk/ublk.o 00:04:28.502 CC lib/nbd/nbd.o 00:04:28.502 CC lib/nvmf/ctrlr.o 00:04:28.502 LIB libspdk_lvol.a 00:04:28.502 SO libspdk_lvol.so.10.0 00:04:28.502 SYMLINK libspdk_lvol.so 00:04:28.502 CC lib/ublk/ublk_rpc.o 00:04:28.764 CC lib/scsi/scsi_pr.o 00:04:28.764 CC lib/scsi/scsi_rpc.o 00:04:28.764 CC lib/scsi/task.o 00:04:28.764 CC lib/nbd/nbd_rpc.o 00:04:28.764 CC lib/ftl/ftl_init.o 00:04:28.764 CC lib/nvmf/ctrlr_discovery.o 00:04:28.764 CC lib/ftl/ftl_layout.o 00:04:28.764 CC lib/nvmf/ctrlr_bdev.o 00:04:29.025 CC lib/nvmf/subsystem.o 00:04:29.025 LIB libspdk_nbd.a 00:04:29.025 CC lib/ftl/ftl_debug.o 00:04:29.025 SO libspdk_nbd.so.7.0 00:04:29.025 CC lib/ftl/ftl_io.o 00:04:29.025 LIB libspdk_scsi.a 00:04:29.025 SYMLINK libspdk_nbd.so 00:04:29.025 CC lib/ftl/ftl_sb.o 00:04:29.025 SO libspdk_scsi.so.9.0 00:04:29.025 LIB libspdk_ublk.a 00:04:29.025 SO libspdk_ublk.so.3.0 00:04:29.284 SYMLINK libspdk_scsi.so 00:04:29.284 CC lib/ftl/ftl_l2p.o 00:04:29.284 CC lib/ftl/ftl_l2p_flat.o 00:04:29.284 SYMLINK libspdk_ublk.so 00:04:29.284 CC lib/ftl/ftl_nv_cache.o 00:04:29.284 CC lib/nvmf/nvmf.o 00:04:29.284 CC lib/ftl/ftl_band.o 00:04:29.284 CC lib/nvmf/nvmf_rpc.o 00:04:29.543 CC lib/nvmf/transport.o 00:04:29.543 CC lib/ftl/ftl_band_ops.o 00:04:29.543 CC lib/iscsi/conn.o 00:04:29.543 CC lib/nvmf/tcp.o 00:04:29.543 CC lib/nvmf/stubs.o 00:04:29.801 CC lib/nvmf/mdns_server.o 00:04:30.060 CC lib/nvmf/rdma.o 00:04:30.060 CC lib/iscsi/init_grp.o 00:04:30.060 CC lib/iscsi/iscsi.o 00:04:30.060 CC lib/nvmf/auth.o 00:04:30.060 CC lib/ftl/ftl_writer.o 00:04:30.060 CC lib/iscsi/param.o 00:04:30.318 CC lib/iscsi/portal_grp.o 00:04:30.318 CC lib/iscsi/tgt_node.o 00:04:30.318 CC lib/iscsi/iscsi_subsystem.o 00:04:30.318 CC lib/iscsi/iscsi_rpc.o 00:04:30.577 CC lib/ftl/ftl_rq.o 00:04:30.577 CC lib/iscsi/task.o 00:04:30.577 CC lib/ftl/ftl_reloc.o 00:04:30.577 CC lib/vhost/vhost.o 00:04:30.577 CC lib/ftl/ftl_l2p_cache.o 00:04:30.834 CC lib/ftl/ftl_p2l.o 00:04:30.834 CC lib/ftl/ftl_p2l_log.o 00:04:30.834 CC lib/vhost/vhost_rpc.o 00:04:31.092 CC lib/vhost/vhost_scsi.o 00:04:31.092 CC lib/vhost/vhost_blk.o 00:04:31.092 CC 
lib/ftl/mngt/ftl_mngt.o 00:04:31.092 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.350 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.350 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:31.350 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:31.350 CC lib/vhost/rte_vhost_user.o 00:04:31.350 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:31.608 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:31.608 LIB libspdk_iscsi.a 00:04:31.608 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:31.608 SO libspdk_iscsi.so.8.0 00:04:31.608 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:31.866 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:31.866 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:31.866 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:31.866 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:31.866 SYMLINK libspdk_iscsi.so 00:04:31.866 CC lib/ftl/utils/ftl_conf.o 00:04:31.866 CC lib/ftl/utils/ftl_md.o 00:04:32.124 CC lib/ftl/utils/ftl_mempool.o 00:04:32.124 CC lib/ftl/utils/ftl_bitmap.o 00:04:32.124 CC lib/ftl/utils/ftl_property.o 00:04:32.124 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:32.124 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:32.124 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:32.124 LIB libspdk_nvmf.a 00:04:32.124 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:32.124 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:32.382 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:32.382 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:32.382 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:32.382 SO libspdk_nvmf.so.20.0 00:04:32.382 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:32.382 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:32.382 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:32.382 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:32.382 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:32.382 CC lib/ftl/base/ftl_base_dev.o 00:04:32.382 CC lib/ftl/base/ftl_base_bdev.o 00:04:32.640 SYMLINK libspdk_nvmf.so 00:04:32.640 CC lib/ftl/ftl_trace.o 00:04:32.640 LIB libspdk_vhost.a 00:04:32.640 SO libspdk_vhost.so.8.0 00:04:32.897 LIB libspdk_ftl.a 00:04:32.897 SYMLINK libspdk_vhost.so 00:04:33.154 SO libspdk_ftl.so.9.0 00:04:33.415 SYMLINK libspdk_ftl.so 00:04:33.673 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.930 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.930 CC module/sock/posix/posix.o 00:04:33.930 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.930 CC module/accel/error/accel_error.o 00:04:33.930 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.930 CC module/accel/ioat/accel_ioat.o 00:04:33.930 CC module/fsdev/aio/fsdev_aio.o 00:04:33.930 CC module/blob/bdev/blob_bdev.o 00:04:33.930 CC module/keyring/file/keyring.o 00:04:33.930 LIB libspdk_env_dpdk_rpc.a 00:04:33.930 SO libspdk_env_dpdk_rpc.so.6.0 00:04:33.930 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.930 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:33.930 LIB libspdk_scheduler_gscheduler.a 00:04:33.930 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.930 CC module/keyring/file/keyring_rpc.o 00:04:33.931 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.931 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.931 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.931 LIB libspdk_scheduler_dynamic.a 00:04:33.931 CC module/accel/error/accel_error_rpc.o 00:04:34.188 SO libspdk_scheduler_dynamic.so.4.0 00:04:34.188 SYMLINK libspdk_scheduler_gscheduler.so 00:04:34.188 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:34.188 CC module/fsdev/aio/linux_aio_mgr.o 00:04:34.188 LIB libspdk_blob_bdev.a 00:04:34.188 SYMLINK libspdk_scheduler_dynamic.so 00:04:34.188 LIB libspdk_keyring_file.a 00:04:34.188 SO libspdk_blob_bdev.so.11.0 00:04:34.188 SO libspdk_keyring_file.so.2.0 00:04:34.188 LIB 
libspdk_accel_ioat.a 00:04:34.188 SO libspdk_accel_ioat.so.6.0 00:04:34.188 LIB libspdk_accel_error.a 00:04:34.188 SYMLINK libspdk_blob_bdev.so 00:04:34.188 SO libspdk_accel_error.so.2.0 00:04:34.188 CC module/accel/iaa/accel_iaa.o 00:04:34.188 CC module/accel/dsa/accel_dsa.o 00:04:34.188 SYMLINK libspdk_keyring_file.so 00:04:34.188 SYMLINK libspdk_accel_ioat.so 00:04:34.188 CC module/accel/iaa/accel_iaa_rpc.o 00:04:34.447 SYMLINK libspdk_accel_error.so 00:04:34.447 CC module/accel/dsa/accel_dsa_rpc.o 00:04:34.447 CC module/sock/uring/uring.o 00:04:34.447 CC module/keyring/linux/keyring.o 00:04:34.447 LIB libspdk_accel_iaa.a 00:04:34.447 LIB libspdk_fsdev_aio.a 00:04:34.710 LIB libspdk_accel_dsa.a 00:04:34.710 SO libspdk_accel_iaa.so.3.0 00:04:34.710 CC module/bdev/delay/vbdev_delay.o 00:04:34.710 SO libspdk_accel_dsa.so.5.0 00:04:34.710 SO libspdk_fsdev_aio.so.1.0 00:04:34.710 LIB libspdk_sock_posix.a 00:04:34.710 CC module/blobfs/bdev/blobfs_bdev.o 00:04:34.710 SYMLINK libspdk_accel_iaa.so 00:04:34.710 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:34.710 SO libspdk_sock_posix.so.6.0 00:04:34.710 CC module/bdev/error/vbdev_error.o 00:04:34.710 SYMLINK libspdk_accel_dsa.so 00:04:34.710 CC module/keyring/linux/keyring_rpc.o 00:04:34.710 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:34.710 SYMLINK libspdk_fsdev_aio.so 00:04:34.710 CC module/bdev/gpt/gpt.o 00:04:34.710 CC module/bdev/error/vbdev_error_rpc.o 00:04:34.710 SYMLINK libspdk_sock_posix.so 00:04:34.710 CC module/bdev/gpt/vbdev_gpt.o 00:04:34.710 LIB libspdk_keyring_linux.a 00:04:34.977 SO libspdk_keyring_linux.so.1.0 00:04:34.977 LIB libspdk_blobfs_bdev.a 00:04:34.977 SYMLINK libspdk_keyring_linux.so 00:04:34.977 SO libspdk_blobfs_bdev.so.6.0 00:04:34.977 LIB libspdk_bdev_error.a 00:04:34.977 SYMLINK libspdk_blobfs_bdev.so 00:04:34.977 LIB libspdk_bdev_delay.a 00:04:34.977 SO libspdk_bdev_error.so.6.0 00:04:34.977 CC module/bdev/lvol/vbdev_lvol.o 00:04:34.977 SO libspdk_bdev_delay.so.6.0 00:04:34.977 LIB libspdk_bdev_gpt.a 00:04:34.977 CC module/bdev/malloc/bdev_malloc.o 00:04:34.977 LIB libspdk_sock_uring.a 00:04:34.977 SYMLINK libspdk_bdev_error.so 00:04:34.977 CC module/bdev/null/bdev_null.o 00:04:34.977 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.977 SO libspdk_bdev_gpt.so.6.0 00:04:34.977 CC module/bdev/nvme/bdev_nvme.o 00:04:35.235 SO libspdk_sock_uring.so.5.0 00:04:35.235 CC module/bdev/passthru/vbdev_passthru.o 00:04:35.235 SYMLINK libspdk_bdev_delay.so 00:04:35.235 SYMLINK libspdk_bdev_gpt.so 00:04:35.235 CC module/bdev/raid/bdev_raid.o 00:04:35.235 CC module/bdev/null/bdev_null_rpc.o 00:04:35.235 SYMLINK libspdk_sock_uring.so 00:04:35.235 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:35.235 CC module/bdev/split/vbdev_split.o 00:04:35.494 CC module/bdev/split/vbdev_split_rpc.o 00:04:35.494 LIB libspdk_bdev_null.a 00:04:35.494 SO libspdk_bdev_null.so.6.0 00:04:35.494 LIB libspdk_bdev_passthru.a 00:04:35.494 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:35.494 LIB libspdk_bdev_malloc.a 00:04:35.494 SO libspdk_bdev_passthru.so.6.0 00:04:35.494 SYMLINK libspdk_bdev_null.so 00:04:35.494 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:35.494 SO libspdk_bdev_malloc.so.6.0 00:04:35.494 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:35.494 CC module/bdev/uring/bdev_uring.o 00:04:35.494 SYMLINK libspdk_bdev_passthru.so 00:04:35.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:35.494 LIB libspdk_bdev_split.a 00:04:35.494 SYMLINK libspdk_bdev_malloc.so 00:04:35.494 CC module/bdev/nvme/nvme_rpc.o 00:04:35.494 SO 
libspdk_bdev_split.so.6.0 00:04:35.752 CC module/bdev/raid/bdev_raid_rpc.o 00:04:35.752 SYMLINK libspdk_bdev_split.so 00:04:35.752 CC module/bdev/uring/bdev_uring_rpc.o 00:04:35.752 CC module/bdev/aio/bdev_aio.o 00:04:35.752 LIB libspdk_bdev_zone_block.a 00:04:35.752 SO libspdk_bdev_zone_block.so.6.0 00:04:35.752 CC module/bdev/nvme/bdev_mdns_client.o 00:04:35.752 CC module/bdev/nvme/vbdev_opal.o 00:04:36.011 SYMLINK libspdk_bdev_zone_block.so 00:04:36.011 CC module/bdev/aio/bdev_aio_rpc.o 00:04:36.011 LIB libspdk_bdev_uring.a 00:04:36.011 LIB libspdk_bdev_lvol.a 00:04:36.011 SO libspdk_bdev_uring.so.6.0 00:04:36.011 SO libspdk_bdev_lvol.so.6.0 00:04:36.011 SYMLINK libspdk_bdev_uring.so 00:04:36.011 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.011 SYMLINK libspdk_bdev_lvol.so 00:04:36.011 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.011 CC module/bdev/ftl/bdev_ftl.o 00:04:36.011 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:36.011 LIB libspdk_bdev_aio.a 00:04:36.011 SO libspdk_bdev_aio.so.6.0 00:04:36.269 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.269 SYMLINK libspdk_bdev_aio.so 00:04:36.269 CC module/bdev/raid/raid0.o 00:04:36.269 CC module/bdev/iscsi/bdev_iscsi.o 00:04:36.269 CC module/bdev/raid/raid1.o 00:04:36.269 CC module/bdev/raid/concat.o 00:04:36.269 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:36.528 LIB libspdk_bdev_ftl.a 00:04:36.528 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:36.528 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:36.528 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:36.528 SO libspdk_bdev_ftl.so.6.0 00:04:36.528 SYMLINK libspdk_bdev_ftl.so 00:04:36.528 LIB libspdk_bdev_raid.a 00:04:36.528 LIB libspdk_bdev_iscsi.a 00:04:36.528 SO libspdk_bdev_raid.so.6.0 00:04:36.787 SO libspdk_bdev_iscsi.so.6.0 00:04:36.787 SYMLINK libspdk_bdev_raid.so 00:04:36.787 SYMLINK libspdk_bdev_iscsi.so 00:04:37.047 LIB libspdk_bdev_virtio.a 00:04:37.047 SO libspdk_bdev_virtio.so.6.0 00:04:37.047 SYMLINK libspdk_bdev_virtio.so 00:04:37.615 LIB libspdk_bdev_nvme.a 00:04:37.874 SO libspdk_bdev_nvme.so.7.1 00:04:37.874 SYMLINK libspdk_bdev_nvme.so 00:04:38.441 CC module/event/subsystems/iobuf/iobuf.o 00:04:38.441 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:38.441 CC module/event/subsystems/scheduler/scheduler.o 00:04:38.441 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:38.441 CC module/event/subsystems/vmd/vmd.o 00:04:38.441 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:38.441 CC module/event/subsystems/sock/sock.o 00:04:38.441 CC module/event/subsystems/keyring/keyring.o 00:04:38.441 CC module/event/subsystems/fsdev/fsdev.o 00:04:38.441 LIB libspdk_event_sock.a 00:04:38.441 LIB libspdk_event_scheduler.a 00:04:38.441 LIB libspdk_event_keyring.a 00:04:38.441 LIB libspdk_event_vhost_blk.a 00:04:38.441 LIB libspdk_event_iobuf.a 00:04:38.441 LIB libspdk_event_vmd.a 00:04:38.441 SO libspdk_event_sock.so.5.0 00:04:38.441 LIB libspdk_event_fsdev.a 00:04:38.441 SO libspdk_event_scheduler.so.4.0 00:04:38.441 SO libspdk_event_keyring.so.1.0 00:04:38.700 SO libspdk_event_vhost_blk.so.3.0 00:04:38.700 SO libspdk_event_iobuf.so.3.0 00:04:38.700 SO libspdk_event_vmd.so.6.0 00:04:38.700 SO libspdk_event_fsdev.so.1.0 00:04:38.700 SYMLINK libspdk_event_sock.so 00:04:38.700 SYMLINK libspdk_event_scheduler.so 00:04:38.700 SYMLINK libspdk_event_keyring.so 00:04:38.700 SYMLINK libspdk_event_vhost_blk.so 00:04:38.700 SYMLINK libspdk_event_iobuf.so 00:04:38.700 SYMLINK libspdk_event_fsdev.so 00:04:38.700 SYMLINK libspdk_event_vmd.so 00:04:38.959 CC module/event/subsystems/accel/accel.o 
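(Annotation, not part of the job output: the module/event/subsystems/* objects linked above implement deferred subsystem initialization in the SPDK app framework. The sketch below shows the usual way that flow is driven; the flag and RPC names are standard SPDK usage assumed here, not output of this job.)

```bash
# Illustrative only: start an app with subsystem init deferred, then let the
# event framework bring up the subsystems compiled above.
./build/bin/spdk_tgt --wait-for-rpc &
sleep 3
./scripts/rpc.py framework_start_init    # kick off init of accel, sock, keyring, fsdev, ...
./scripts/rpc.py framework_wait_init     # returns once initialization has finished
./scripts/rpc.py spdk_kill_instance SIGTERM
wait
```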
00:04:39.218 LIB libspdk_event_accel.a 00:04:39.218 SO libspdk_event_accel.so.6.0 00:04:39.218 SYMLINK libspdk_event_accel.so 00:04:39.476 CC module/event/subsystems/bdev/bdev.o 00:04:39.735 LIB libspdk_event_bdev.a 00:04:39.735 SO libspdk_event_bdev.so.6.0 00:04:39.735 SYMLINK libspdk_event_bdev.so 00:04:39.993 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.993 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.993 CC module/event/subsystems/scsi/scsi.o 00:04:39.993 CC module/event/subsystems/ublk/ublk.o 00:04:39.993 CC module/event/subsystems/nbd/nbd.o 00:04:40.252 LIB libspdk_event_ublk.a 00:04:40.252 LIB libspdk_event_nbd.a 00:04:40.252 LIB libspdk_event_scsi.a 00:04:40.252 SO libspdk_event_ublk.so.3.0 00:04:40.252 SO libspdk_event_nbd.so.6.0 00:04:40.252 SO libspdk_event_scsi.so.6.0 00:04:40.252 SYMLINK libspdk_event_ublk.so 00:04:40.252 LIB libspdk_event_nvmf.a 00:04:40.252 SYMLINK libspdk_event_nbd.so 00:04:40.252 SYMLINK libspdk_event_scsi.so 00:04:40.510 SO libspdk_event_nvmf.so.6.0 00:04:40.510 SYMLINK libspdk_event_nvmf.so 00:04:40.510 CC module/event/subsystems/iscsi/iscsi.o 00:04:40.511 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:40.770 LIB libspdk_event_vhost_scsi.a 00:04:40.770 LIB libspdk_event_iscsi.a 00:04:40.770 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.770 SO libspdk_event_iscsi.so.6.0 00:04:41.029 SYMLINK libspdk_event_vhost_scsi.so 00:04:41.029 SYMLINK libspdk_event_iscsi.so 00:04:41.029 SO libspdk.so.6.0 00:04:41.029 SYMLINK libspdk.so 00:04:41.288 CC test/rpc_client/rpc_client_test.o 00:04:41.288 TEST_HEADER include/spdk/accel.h 00:04:41.288 CXX app/trace/trace.o 00:04:41.288 TEST_HEADER include/spdk/accel_module.h 00:04:41.288 TEST_HEADER include/spdk/assert.h 00:04:41.288 TEST_HEADER include/spdk/barrier.h 00:04:41.288 TEST_HEADER include/spdk/base64.h 00:04:41.288 TEST_HEADER include/spdk/bdev.h 00:04:41.288 TEST_HEADER include/spdk/bdev_module.h 00:04:41.288 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.288 TEST_HEADER include/spdk/bit_array.h 00:04:41.288 TEST_HEADER include/spdk/bit_pool.h 00:04:41.288 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.288 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.288 TEST_HEADER include/spdk/blobfs.h 00:04:41.288 TEST_HEADER include/spdk/blob.h 00:04:41.288 TEST_HEADER include/spdk/conf.h 00:04:41.288 TEST_HEADER include/spdk/config.h 00:04:41.288 TEST_HEADER include/spdk/cpuset.h 00:04:41.288 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:41.288 TEST_HEADER include/spdk/crc16.h 00:04:41.547 TEST_HEADER include/spdk/crc32.h 00:04:41.547 TEST_HEADER include/spdk/crc64.h 00:04:41.547 TEST_HEADER include/spdk/dif.h 00:04:41.547 TEST_HEADER include/spdk/dma.h 00:04:41.547 TEST_HEADER include/spdk/endian.h 00:04:41.547 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.547 TEST_HEADER include/spdk/env.h 00:04:41.547 TEST_HEADER include/spdk/event.h 00:04:41.547 TEST_HEADER include/spdk/fd_group.h 00:04:41.547 TEST_HEADER include/spdk/fd.h 00:04:41.547 TEST_HEADER include/spdk/file.h 00:04:41.547 TEST_HEADER include/spdk/fsdev.h 00:04:41.547 TEST_HEADER include/spdk/fsdev_module.h 00:04:41.547 TEST_HEADER include/spdk/ftl.h 00:04:41.547 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:41.547 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.547 TEST_HEADER include/spdk/hexlify.h 00:04:41.547 TEST_HEADER include/spdk/histogram_data.h 00:04:41.547 CC examples/ioat/perf/perf.o 00:04:41.548 TEST_HEADER include/spdk/idxd.h 00:04:41.548 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.548 CC test/thread/poller_perf/poller_perf.o 
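(Annotation, not part of the job output: each event subsystem above ends with an SO step for a versioned object and a SYMLINK step for its unversioned name, and the umbrella libspdk.so.6.0 is produced the same way. A hedged way to confirm the result on disk, with build/lib assumed as the output directory, is:)

```bash
# Illustrative only: list the per-subsystem event plugins and check the
# umbrella symlink produced by the SO/SYMLINK steps above.
SPDK_LIB=/home/vagrant/spdk_repo/spdk/build/lib   # assumed output path
ls -l "$SPDK_LIB"/libspdk_event_*.so*             # per-subsystem event plugins
readlink -f "$SPDK_LIB"/libspdk.so                # expected to resolve to libspdk.so.6.0
```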
00:04:41.548 CC examples/util/zipf/zipf.o 00:04:41.548 TEST_HEADER include/spdk/init.h 00:04:41.548 TEST_HEADER include/spdk/ioat.h 00:04:41.548 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.548 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.548 TEST_HEADER include/spdk/json.h 00:04:41.548 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.548 TEST_HEADER include/spdk/keyring.h 00:04:41.548 TEST_HEADER include/spdk/keyring_module.h 00:04:41.548 TEST_HEADER include/spdk/likely.h 00:04:41.548 TEST_HEADER include/spdk/log.h 00:04:41.548 TEST_HEADER include/spdk/lvol.h 00:04:41.548 TEST_HEADER include/spdk/md5.h 00:04:41.548 TEST_HEADER include/spdk/memory.h 00:04:41.548 TEST_HEADER include/spdk/mmio.h 00:04:41.548 TEST_HEADER include/spdk/nbd.h 00:04:41.548 CC test/dma/test_dma/test_dma.o 00:04:41.548 TEST_HEADER include/spdk/net.h 00:04:41.548 TEST_HEADER include/spdk/notify.h 00:04:41.548 TEST_HEADER include/spdk/nvme.h 00:04:41.548 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.548 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.548 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.548 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.548 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.548 CC test/app/bdev_svc/bdev_svc.o 00:04:41.548 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.548 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.548 TEST_HEADER include/spdk/nvmf.h 00:04:41.548 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.548 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.548 TEST_HEADER include/spdk/opal.h 00:04:41.548 TEST_HEADER include/spdk/opal_spec.h 00:04:41.548 TEST_HEADER include/spdk/pci_ids.h 00:04:41.548 TEST_HEADER include/spdk/pipe.h 00:04:41.548 TEST_HEADER include/spdk/queue.h 00:04:41.548 TEST_HEADER include/spdk/reduce.h 00:04:41.548 TEST_HEADER include/spdk/rpc.h 00:04:41.548 TEST_HEADER include/spdk/scheduler.h 00:04:41.548 LINK rpc_client_test 00:04:41.548 TEST_HEADER include/spdk/scsi.h 00:04:41.548 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.548 TEST_HEADER include/spdk/sock.h 00:04:41.548 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.548 TEST_HEADER include/spdk/stdinc.h 00:04:41.548 TEST_HEADER include/spdk/string.h 00:04:41.548 TEST_HEADER include/spdk/thread.h 00:04:41.548 TEST_HEADER include/spdk/trace.h 00:04:41.548 TEST_HEADER include/spdk/trace_parser.h 00:04:41.548 TEST_HEADER include/spdk/tree.h 00:04:41.548 TEST_HEADER include/spdk/ublk.h 00:04:41.548 TEST_HEADER include/spdk/util.h 00:04:41.548 TEST_HEADER include/spdk/uuid.h 00:04:41.548 TEST_HEADER include/spdk/version.h 00:04:41.548 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.548 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.548 TEST_HEADER include/spdk/vhost.h 00:04:41.548 TEST_HEADER include/spdk/vmd.h 00:04:41.548 TEST_HEADER include/spdk/xor.h 00:04:41.548 TEST_HEADER include/spdk/zipf.h 00:04:41.807 CXX test/cpp_headers/accel.o 00:04:41.807 LINK zipf 00:04:41.807 LINK poller_perf 00:04:41.807 LINK interrupt_tgt 00:04:41.807 LINK ioat_perf 00:04:41.807 LINK bdev_svc 00:04:41.807 LINK spdk_trace 00:04:41.807 CXX test/cpp_headers/accel_module.o 00:04:41.807 CC app/trace_record/trace_record.o 00:04:42.066 CC test/env/vtophys/vtophys.o 00:04:42.066 CC app/nvmf_tgt/nvmf_main.o 00:04:42.066 CC app/iscsi_tgt/iscsi_tgt.o 00:04:42.066 CXX test/cpp_headers/assert.o 00:04:42.066 CC examples/ioat/verify/verify.o 00:04:42.066 LINK test_dma 00:04:42.066 LINK vtophys 00:04:42.066 LINK spdk_trace_record 00:04:42.066 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.324 LINK nvmf_tgt 00:04:42.324 CXX 
test/cpp_headers/barrier.o 00:04:42.324 CC app/spdk_tgt/spdk_tgt.o 00:04:42.324 LINK iscsi_tgt 00:04:42.324 LINK verify 00:04:42.324 LINK mem_callbacks 00:04:42.324 CXX test/cpp_headers/base64.o 00:04:42.324 CXX test/cpp_headers/bdev.o 00:04:42.324 CC app/spdk_lspci/spdk_lspci.o 00:04:42.583 LINK spdk_tgt 00:04:42.583 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:42.583 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:42.583 CC test/env/memory/memory_ut.o 00:04:42.583 CC test/env/pci/pci_ut.o 00:04:42.583 CXX test/cpp_headers/bdev_module.o 00:04:42.583 LINK spdk_lspci 00:04:42.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.583 LINK nvme_fuzz 00:04:42.583 CC examples/thread/thread/thread_ex.o 00:04:42.583 LINK env_dpdk_post_init 00:04:42.583 CXX test/cpp_headers/bdev_zone.o 00:04:42.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.841 CC app/spdk_nvme_perf/perf.o 00:04:42.841 CC app/spdk_nvme_identify/identify.o 00:04:42.841 CC app/spdk_nvme_discover/discovery_aer.o 00:04:42.841 CXX test/cpp_headers/bit_array.o 00:04:42.841 LINK pci_ut 00:04:42.841 LINK thread 00:04:42.841 CC test/event/event_perf/event_perf.o 00:04:43.101 CXX test/cpp_headers/bit_pool.o 00:04:43.101 LINK spdk_nvme_discover 00:04:43.101 LINK vhost_fuzz 00:04:43.101 LINK event_perf 00:04:43.101 CXX test/cpp_headers/blob_bdev.o 00:04:43.365 CC examples/sock/hello_world/hello_sock.o 00:04:43.365 CXX test/cpp_headers/blobfs_bdev.o 00:04:43.365 CC test/event/reactor/reactor.o 00:04:43.365 CC test/nvme/aer/aer.o 00:04:43.365 CC test/event/reactor_perf/reactor_perf.o 00:04:43.365 LINK reactor 00:04:43.623 CXX test/cpp_headers/blobfs.o 00:04:43.623 LINK reactor_perf 00:04:43.623 LINK hello_sock 00:04:43.623 LINK spdk_nvme_identify 00:04:43.623 LINK aer 00:04:43.623 CC test/accel/dif/dif.o 00:04:43.623 CXX test/cpp_headers/blob.o 00:04:43.623 LINK spdk_nvme_perf 00:04:43.623 LINK memory_ut 00:04:43.881 CC test/event/app_repeat/app_repeat.o 00:04:43.881 CXX test/cpp_headers/conf.o 00:04:43.881 CC test/nvme/reset/reset.o 00:04:43.881 CC examples/vmd/lsvmd/lsvmd.o 00:04:43.881 CC examples/idxd/perf/perf.o 00:04:43.881 CC app/spdk_top/spdk_top.o 00:04:43.881 LINK app_repeat 00:04:43.881 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:43.881 CXX test/cpp_headers/config.o 00:04:43.881 CXX test/cpp_headers/cpuset.o 00:04:44.139 LINK lsvmd 00:04:44.140 CC examples/accel/perf/accel_perf.o 00:04:44.140 LINK iscsi_fuzz 00:04:44.140 LINK reset 00:04:44.140 CXX test/cpp_headers/crc16.o 00:04:44.140 CC test/event/scheduler/scheduler.o 00:04:44.140 LINK idxd_perf 00:04:44.140 CC examples/vmd/led/led.o 00:04:44.140 LINK hello_fsdev 00:04:44.398 LINK dif 00:04:44.398 CXX test/cpp_headers/crc32.o 00:04:44.398 CXX test/cpp_headers/crc64.o 00:04:44.398 CC test/nvme/sgl/sgl.o 00:04:44.398 CC test/app/histogram_perf/histogram_perf.o 00:04:44.398 LINK led 00:04:44.398 LINK scheduler 00:04:44.663 LINK accel_perf 00:04:44.663 CXX test/cpp_headers/dif.o 00:04:44.663 LINK histogram_perf 00:04:44.663 CC examples/nvme/hello_world/hello_world.o 00:04:44.663 CC examples/blob/hello_world/hello_blob.o 00:04:44.663 LINK sgl 00:04:44.663 CXX test/cpp_headers/dma.o 00:04:44.663 CC test/blobfs/mkfs/mkfs.o 00:04:44.663 LINK spdk_top 00:04:44.663 CC examples/blob/cli/blobcli.o 00:04:44.927 CC test/nvme/e2edp/nvme_dp.o 00:04:44.927 CC test/app/jsoncat/jsoncat.o 00:04:44.927 CC test/lvol/esnap/esnap.o 00:04:44.927 CXX test/cpp_headers/endian.o 00:04:44.927 LINK hello_world 00:04:44.927 CC test/nvme/overhead/overhead.o 00:04:44.927 LINK 
hello_blob 00:04:44.927 LINK jsoncat 00:04:44.927 LINK mkfs 00:04:45.186 CC app/vhost/vhost.o 00:04:45.186 CXX test/cpp_headers/env_dpdk.o 00:04:45.186 LINK nvme_dp 00:04:45.186 CXX test/cpp_headers/env.o 00:04:45.186 CC examples/nvme/reconnect/reconnect.o 00:04:45.186 CC test/app/stub/stub.o 00:04:45.186 CXX test/cpp_headers/event.o 00:04:45.186 LINK overhead 00:04:45.186 LINK vhost 00:04:45.186 LINK blobcli 00:04:45.444 CC test/nvme/err_injection/err_injection.o 00:04:45.444 CXX test/cpp_headers/fd_group.o 00:04:45.444 LINK stub 00:04:45.444 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:45.444 CC test/nvme/startup/startup.o 00:04:45.444 LINK reconnect 00:04:45.444 LINK err_injection 00:04:45.702 CXX test/cpp_headers/fd.o 00:04:45.702 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.702 CC app/spdk_dd/spdk_dd.o 00:04:45.702 CXX test/cpp_headers/file.o 00:04:45.702 CC examples/bdev/bdevperf/bdevperf.o 00:04:45.702 LINK startup 00:04:45.702 CC test/bdev/bdevio/bdevio.o 00:04:45.702 CC test/nvme/reserve/reserve.o 00:04:45.961 CXX test/cpp_headers/fsdev.o 00:04:45.961 LINK hello_bdev 00:04:45.961 CC test/nvme/simple_copy/simple_copy.o 00:04:45.961 CXX test/cpp_headers/fsdev_module.o 00:04:45.961 LINK nvme_manage 00:04:45.961 LINK reserve 00:04:45.961 CXX test/cpp_headers/ftl.o 00:04:45.961 LINK spdk_dd 00:04:46.220 CC test/nvme/connect_stress/connect_stress.o 00:04:46.220 LINK simple_copy 00:04:46.220 CC examples/nvme/arbitration/arbitration.o 00:04:46.220 CC test/nvme/boot_partition/boot_partition.o 00:04:46.220 LINK bdevio 00:04:46.220 CC test/nvme/compliance/nvme_compliance.o 00:04:46.220 CXX test/cpp_headers/fuse_dispatcher.o 00:04:46.220 LINK connect_stress 00:04:46.220 LINK boot_partition 00:04:46.220 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.478 CC examples/nvme/hotplug/hotplug.o 00:04:46.478 LINK bdevperf 00:04:46.478 CC app/fio/nvme/fio_plugin.o 00:04:46.478 LINK arbitration 00:04:46.478 CXX test/cpp_headers/gpt_spec.o 00:04:46.478 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.478 LINK fused_ordering 00:04:46.478 LINK nvme_compliance 00:04:46.478 CC test/nvme/fdp/fdp.o 00:04:46.478 CXX test/cpp_headers/hexlify.o 00:04:46.736 CXX test/cpp_headers/histogram_data.o 00:04:46.736 LINK hotplug 00:04:46.736 CXX test/cpp_headers/idxd.o 00:04:46.736 CC app/fio/bdev/fio_plugin.o 00:04:46.736 LINK doorbell_aers 00:04:46.736 CC test/nvme/cuse/cuse.o 00:04:46.736 CXX test/cpp_headers/idxd_spec.o 00:04:46.736 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.995 CC examples/nvme/abort/abort.o 00:04:46.995 CXX test/cpp_headers/init.o 00:04:46.995 CXX test/cpp_headers/ioat.o 00:04:46.995 LINK fdp 00:04:46.995 LINK spdk_nvme 00:04:46.995 CXX test/cpp_headers/ioat_spec.o 00:04:46.995 LINK cmb_copy 00:04:46.995 CXX test/cpp_headers/iscsi_spec.o 00:04:46.995 CXX test/cpp_headers/json.o 00:04:46.995 CXX test/cpp_headers/jsonrpc.o 00:04:47.255 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:47.255 CXX test/cpp_headers/keyring.o 00:04:47.255 LINK spdk_bdev 00:04:47.255 CXX test/cpp_headers/keyring_module.o 00:04:47.255 CXX test/cpp_headers/likely.o 00:04:47.255 CXX test/cpp_headers/log.o 00:04:47.255 LINK abort 00:04:47.255 CXX test/cpp_headers/lvol.o 00:04:47.255 LINK pmr_persistence 00:04:47.255 CXX test/cpp_headers/md5.o 00:04:47.513 CXX test/cpp_headers/memory.o 00:04:47.514 CXX test/cpp_headers/mmio.o 00:04:47.514 CXX test/cpp_headers/nbd.o 00:04:47.514 CXX test/cpp_headers/net.o 00:04:47.514 CXX test/cpp_headers/notify.o 00:04:47.514 CXX test/cpp_headers/nvme.o 
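(Annotation, not part of the job output: the CXX test/cpp_headers/*.o objects above exist to prove that every public header under include/spdk is self-contained and consumable from C++. A standalone approximation of that check, with the repository path assumed rather than taken from this log, could be:)

```bash
# Illustrative only: compile each public header on its own, mirroring the
# test/cpp_headers objects built above.
cd /home/vagrant/spdk_repo/spdk              # assumed repo location
for h in include/spdk/*.h; do
    g++ -fsyntax-only -Iinclude -x c++ "$h" || echo "header not self-contained: $h"
done
```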
00:04:47.514 CXX test/cpp_headers/nvme_intel.o 00:04:47.514 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.514 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.514 CXX test/cpp_headers/nvme_spec.o 00:04:47.514 CXX test/cpp_headers/nvme_zns.o 00:04:47.514 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.773 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.773 CXX test/cpp_headers/nvmf.o 00:04:47.773 CC examples/nvmf/nvmf/nvmf.o 00:04:47.773 CXX test/cpp_headers/nvmf_spec.o 00:04:47.773 CXX test/cpp_headers/nvmf_transport.o 00:04:47.773 CXX test/cpp_headers/opal.o 00:04:47.773 CXX test/cpp_headers/opal_spec.o 00:04:47.773 CXX test/cpp_headers/pci_ids.o 00:04:47.773 CXX test/cpp_headers/pipe.o 00:04:47.773 CXX test/cpp_headers/queue.o 00:04:48.033 CXX test/cpp_headers/reduce.o 00:04:48.033 CXX test/cpp_headers/rpc.o 00:04:48.033 CXX test/cpp_headers/scheduler.o 00:04:48.033 CXX test/cpp_headers/scsi.o 00:04:48.033 LINK nvmf 00:04:48.033 CXX test/cpp_headers/scsi_spec.o 00:04:48.033 CXX test/cpp_headers/sock.o 00:04:48.033 CXX test/cpp_headers/stdinc.o 00:04:48.033 CXX test/cpp_headers/string.o 00:04:48.033 CXX test/cpp_headers/thread.o 00:04:48.292 LINK cuse 00:04:48.292 CXX test/cpp_headers/trace.o 00:04:48.292 CXX test/cpp_headers/trace_parser.o 00:04:48.292 CXX test/cpp_headers/tree.o 00:04:48.292 CXX test/cpp_headers/ublk.o 00:04:48.292 CXX test/cpp_headers/util.o 00:04:48.292 CXX test/cpp_headers/uuid.o 00:04:48.292 CXX test/cpp_headers/version.o 00:04:48.292 CXX test/cpp_headers/vfio_user_pci.o 00:04:48.292 CXX test/cpp_headers/vfio_user_spec.o 00:04:48.292 CXX test/cpp_headers/vhost.o 00:04:48.292 CXX test/cpp_headers/vmd.o 00:04:48.292 CXX test/cpp_headers/xor.o 00:04:48.292 CXX test/cpp_headers/zipf.o 00:04:50.194 LINK esnap 00:04:50.194 00:04:50.194 real 1m30.827s 00:04:50.194 user 8m15.890s 00:04:50.194 sys 1m44.497s 00:04:50.194 14:43:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:50.194 ************************************ 00:04:50.194 END TEST make 00:04:50.195 ************************************ 00:04:50.195 14:43:04 make -- common/autotest_common.sh@10 -- $ set +x 00:04:50.195 14:43:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:50.195 14:43:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:50.195 14:43:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:50.195 14:43:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.195 14:43:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:50.195 14:43:04 -- pm/common@44 -- $ pid=5406 00:04:50.195 14:43:04 -- pm/common@50 -- $ kill -TERM 5406 00:04:50.195 14:43:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.195 14:43:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:50.195 14:43:04 -- pm/common@44 -- $ pid=5408 00:04:50.195 14:43:04 -- pm/common@50 -- $ kill -TERM 5408 00:04:50.195 14:43:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:50.195 14:43:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:50.457 14:43:04 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.457 14:43:04 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.457 14:43:04 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.457 14:43:04 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.457 14:43:04 -- 
scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.457 14:43:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.457 14:43:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.457 14:43:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.457 14:43:04 -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.457 14:43:04 -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.457 14:43:04 -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.457 14:43:04 -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.457 14:43:04 -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.457 14:43:04 -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.457 14:43:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.457 14:43:04 -- scripts/common.sh@344 -- # case "$op" in 00:04:50.457 14:43:04 -- scripts/common.sh@345 -- # : 1 00:04:50.457 14:43:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.457 14:43:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.457 14:43:04 -- scripts/common.sh@365 -- # decimal 1 00:04:50.457 14:43:04 -- scripts/common.sh@353 -- # local d=1 00:04:50.457 14:43:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.457 14:43:04 -- scripts/common.sh@355 -- # echo 1 00:04:50.457 14:43:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.457 14:43:04 -- scripts/common.sh@366 -- # decimal 2 00:04:50.457 14:43:04 -- scripts/common.sh@353 -- # local d=2 00:04:50.457 14:43:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.457 14:43:04 -- scripts/common.sh@355 -- # echo 2 00:04:50.457 14:43:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.457 14:43:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.457 14:43:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.457 14:43:04 -- scripts/common.sh@368 -- # return 0 00:04:50.457 14:43:04 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.457 14:43:04 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.457 --rc genhtml_branch_coverage=1 00:04:50.457 --rc genhtml_function_coverage=1 00:04:50.457 --rc genhtml_legend=1 00:04:50.457 --rc geninfo_all_blocks=1 00:04:50.457 --rc geninfo_unexecuted_blocks=1 00:04:50.457 00:04:50.457 ' 00:04:50.457 14:43:04 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.457 --rc genhtml_branch_coverage=1 00:04:50.457 --rc genhtml_function_coverage=1 00:04:50.457 --rc genhtml_legend=1 00:04:50.457 --rc geninfo_all_blocks=1 00:04:50.457 --rc geninfo_unexecuted_blocks=1 00:04:50.457 00:04:50.458 ' 00:04:50.458 14:43:04 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.458 --rc genhtml_branch_coverage=1 00:04:50.458 --rc genhtml_function_coverage=1 00:04:50.458 --rc genhtml_legend=1 00:04:50.458 --rc geninfo_all_blocks=1 00:04:50.458 --rc geninfo_unexecuted_blocks=1 00:04:50.458 00:04:50.458 ' 00:04:50.458 14:43:04 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.458 --rc genhtml_branch_coverage=1 00:04:50.458 --rc genhtml_function_coverage=1 00:04:50.458 --rc genhtml_legend=1 00:04:50.458 --rc geninfo_all_blocks=1 00:04:50.458 --rc geninfo_unexecuted_blocks=1 00:04:50.458 00:04:50.458 ' 00:04:50.458 14:43:04 -- spdk/autotest.sh@25 
-- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.458 14:43:04 -- nvmf/common.sh@7 -- # uname -s 00:04:50.458 14:43:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.458 14:43:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.458 14:43:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.458 14:43:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.458 14:43:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.458 14:43:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.458 14:43:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.458 14:43:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.458 14:43:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.458 14:43:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.458 14:43:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:04:50.458 14:43:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:04:50.458 14:43:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.458 14:43:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.458 14:43:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:50.458 14:43:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.458 14:43:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.458 14:43:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.458 14:43:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.458 14:43:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.458 14:43:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.458 14:43:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.458 14:43:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.458 14:43:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.458 14:43:04 -- paths/export.sh@5 -- # export PATH 00:04:50.458 14:43:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.458 14:43:04 -- nvmf/common.sh@51 -- # : 0 00:04:50.458 14:43:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.458 14:43:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.458 14:43:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.458 14:43:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.458 14:43:04 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.458 14:43:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.458 14:43:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.458 14:43:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.458 14:43:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.458 14:43:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:50.458 14:43:04 -- spdk/autotest.sh@32 -- # uname -s 00:04:50.458 14:43:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:50.458 14:43:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:50.458 14:43:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.458 14:43:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:50.458 14:43:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.458 14:43:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:50.458 14:43:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:50.458 14:43:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:50.458 14:43:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54527 00:04:50.458 14:43:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:50.458 14:43:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:50.458 14:43:05 -- pm/common@17 -- # local monitor 00:04:50.458 14:43:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.458 14:43:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.458 14:43:05 -- pm/common@25 -- # sleep 1 00:04:50.458 14:43:05 -- pm/common@21 -- # date +%s 00:04:50.458 14:43:05 -- pm/common@21 -- # date +%s 00:04:50.458 14:43:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732286585 00:04:50.458 14:43:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732286585 00:04:50.458 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732286585_collect-cpu-load.pm.log 00:04:50.458 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732286585_collect-vmstat.pm.log 00:04:51.831 14:43:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:51.831 14:43:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:51.831 14:43:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.831 14:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.831 14:43:06 -- spdk/autotest.sh@59 -- # create_test_list 00:04:51.831 14:43:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:51.831 14:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.831 14:43:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:51.831 14:43:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:51.831 14:43:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:51.831 14:43:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:51.831 14:43:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:51.831 14:43:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
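The scripts/common.sh trace earlier in this run is the gate that decides whether the installed lcov (1.15 here) is older than 2 before choosing coverage flags: both version strings are split on '.', '-' and ':' and the components are compared left to right. A minimal stand-alone sketch of that element-wise comparison follows; the function name and the default-to-zero handling of short versions are illustrative, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Returns success (0) when version $1 is strictly older than version $2.
    version_lt() {
        local -a a b
        local i n x y
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2, enabling the branch/function coverage flags"

On a related note, the "[: : integer expression expected" message that nvmf/common.sh logs at line 33 comes from feeding an empty string to an -eq test; defaulting the value first, e.g. [[ ${SOME_FLAG:-0} -eq 1 ]] (SOME_FLAG standing in for whatever variable that line actually reads), would avoid the noise.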
00:04:51.831 14:43:06 -- common/autotest_common.sh@1457 -- # uname 00:04:51.831 14:43:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:51.831 14:43:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:51.831 14:43:06 -- common/autotest_common.sh@1477 -- # uname 00:04:51.831 14:43:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:51.831 14:43:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:51.831 14:43:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:51.831 lcov: LCOV version 1.15 00:04:51.831 14:43:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:06.714 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:06.714 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:24.801 14:43:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:24.801 14:43:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.801 14:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.801 14:43:36 -- spdk/autotest.sh@78 -- # rm -f 00:05:24.801 14:43:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.801 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:24.801 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:24.801 14:43:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:24.801 14:43:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:24.801 14:43:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:24.801 14:43:37 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:24.801 14:43:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.801 14:43:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:24.801 14:43:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:24.801 14:43:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.801 14:43:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:24.801 14:43:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:24.801 14:43:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.801 14:43:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:24.801 14:43:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:24.801 14:43:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:24.801 14:43:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:24.801 14:43:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:24.801 14:43:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:24.801 14:43:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.801 14:43:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:24.801 14:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.801 14:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.801 14:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:24.801 14:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:24.801 14:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:24.801 No valid GPT data, bailing 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # pt= 00:05:24.801 14:43:37 -- scripts/common.sh@395 -- # return 1 00:05:24.801 14:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:24.801 1+0 records in 00:05:24.801 1+0 records out 00:05:24.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439296 s, 239 MB/s 00:05:24.801 14:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.801 14:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.801 14:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:24.801 14:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:24.801 14:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:24.801 No valid GPT data, bailing 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # pt= 00:05:24.801 14:43:37 -- scripts/common.sh@395 -- # return 1 00:05:24.801 14:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:24.801 1+0 records in 00:05:24.801 1+0 records out 00:05:24.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506253 s, 207 MB/s 00:05:24.801 14:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.801 14:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.801 14:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:24.801 14:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:24.801 14:43:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:24.801 No valid GPT data, bailing 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # pt= 00:05:24.801 14:43:37 -- scripts/common.sh@395 -- # return 1 00:05:24.801 14:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:24.801 1+0 records in 00:05:24.801 1+0 records out 00:05:24.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0035454 s, 296 MB/s 00:05:24.801 14:43:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.801 14:43:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.801 14:43:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:24.801 14:43:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:24.801 14:43:37 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:24.801 No valid GPT data, bailing 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:24.801 14:43:37 -- scripts/common.sh@394 -- # pt= 00:05:24.801 14:43:37 -- scripts/common.sh@395 -- # return 1 00:05:24.801 14:43:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:24.801 1+0 records in 00:05:24.801 1+0 records out 00:05:24.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456075 s, 230 MB/s 00:05:24.801 14:43:37 -- spdk/autotest.sh@105 -- # sync 00:05:24.802 14:43:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:24.802 14:43:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:24.802 14:43:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:25.737 14:43:40 -- spdk/autotest.sh@111 -- # uname -s 00:05:25.737 14:43:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:25.737 14:43:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:25.737 14:43:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:26.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.674 Hugepages 00:05:26.674 node hugesize free / total 00:05:26.674 node0 1048576kB 0 / 0 00:05:26.674 node0 2048kB 0 / 0 00:05:26.674 00:05:26.674 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.674 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:26.674 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:26.674 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:26.674 14:43:41 -- spdk/autotest.sh@117 -- # uname -s 00:05:26.674 14:43:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:26.674 14:43:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:26.674 14:43:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.501 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.501 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.501 14:43:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:28.876 14:43:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:28.876 14:43:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:28.876 14:43:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.876 14:43:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:28.876 14:43:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:28.876 14:43:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:28.876 14:43:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.876 14:43:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.876 14:43:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:28.876 14:43:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:28.876 14:43:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:28.876 14:43:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
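The block above walks every NVMe namespace, skips anything the kernel reports as zoned, and zeroes the first MiB of namespaces that carry no partition table so later tests start from a blank device. A condensed sketch of that preparation, using blkid in place of the spdk-gpt.py helper (destructive, shown for illustration only):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    for dev in /dev/nvme*n!(*p*); do        # namespaces only, no partitions
        name=$(basename "$dev")
        zoned=/sys/block/$name/queue/zoned
        if [[ -e $zoned && $(<"$zoned") != none ]]; then
            continue                        # leave zoned namespaces untouched
        fi
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            # No recognizable partition table: blank the first MiB.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync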
00:05:29.135 Waiting for block devices as requested 00:05:29.135 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.135 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.394 14:43:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:29.394 14:43:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:29.394 14:43:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:29.394 14:43:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:29.394 14:43:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:29.394 14:43:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:29.394 14:43:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:29.394 14:43:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:29.394 14:43:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:29.394 14:43:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:29.394 14:43:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:29.394 14:43:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:29.394 14:43:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:29.394 14:43:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:29.394 14:43:43 -- common/autotest_common.sh@1543 -- # continue 00:05:29.394 14:43:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:29.394 14:43:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:29.394 14:43:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:29.394 14:43:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:29.395 14:43:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:29.395 14:43:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:29.395 14:43:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:29.395 14:43:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:29.395 14:43:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:29.395 14:43:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:29.395 14:43:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:29.395 14:43:43 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:29.395 14:43:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:29.395 14:43:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:29.395 14:43:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:29.395 14:43:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:29.395 14:43:43 -- common/autotest_common.sh@1543 -- # continue 00:05:29.395 14:43:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:29.395 14:43:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.395 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.395 14:43:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:29.395 14:43:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.395 14:43:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.395 14:43:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.220 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.220 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.220 14:43:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:30.220 14:43:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.220 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.479 14:43:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:30.479 14:43:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:30.479 14:43:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.479 14:43:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:30.479 14:43:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:30.479 14:43:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:30.479 14:43:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:30.479 14:43:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:30.479 14:43:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:30.479 14:43:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:30.479 14:43:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.479 14:43:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.479 14:43:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.479 14:43:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:30.479 14:43:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:30.479 14:43:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.479 14:43:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:30.479 14:43:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:30.479 14:43:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.479 14:43:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.479 14:43:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:30.479 14:43:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:30.479 14:43:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.479 14:43:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:30.479 14:43:44 -- common/autotest_common.sh@1572 -- # return 0 
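The two passes above resolve each PCI address back to its /dev/nvmeX controller node and then read two id-ctrl fields: OACS (whose bit 3 advertises namespace management) and unvmcap (unallocated capacity); with namespace management supported and nothing left unallocated, the revert step is skipped. A minimal sketch of that probe for a single controller; the BDF and bit mask mirror the trace, the rest is illustrative and assumes nvme-cli plus root access:

    #!/usr/bin/env bash
    bdf=0000:00:10.0

    # Map the PCI address to its controller node, as the readlink/grep pair above does.
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")

    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)

    if (( (oacs & 0x8) != 0 && unvmcap == 0 )); then
        echo "$ctrlr: namespace management supported, no unallocated capacity to revert"
    fi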
00:05:30.479 14:43:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:30.479 14:43:44 -- common/autotest_common.sh@1580 -- # return 0 00:05:30.479 14:43:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:30.479 14:43:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:30.479 14:43:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.479 14:43:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.479 14:43:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:30.479 14:43:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.479 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.479 14:43:44 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:30.479 14:43:44 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:30.479 14:43:44 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:30.479 14:43:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.479 14:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.479 14:43:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.479 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.479 ************************************ 00:05:30.479 START TEST env 00:05:30.479 ************************************ 00:05:30.479 14:43:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.479 * Looking for test storage... 00:05:30.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:30.479 14:43:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.479 14:43:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.479 14:43:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.739 14:43:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.739 14:43:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.739 14:43:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.739 14:43:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.739 14:43:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.739 14:43:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.739 14:43:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.739 14:43:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.740 14:43:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.740 14:43:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.740 14:43:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.740 14:43:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.740 14:43:45 env -- scripts/common.sh@344 -- # case "$op" in 00:05:30.740 14:43:45 env -- scripts/common.sh@345 -- # : 1 00:05:30.740 14:43:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.740 14:43:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.740 14:43:45 env -- scripts/common.sh@365 -- # decimal 1 00:05:30.740 14:43:45 env -- scripts/common.sh@353 -- # local d=1 00:05:30.740 14:43:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.740 14:43:45 env -- scripts/common.sh@355 -- # echo 1 00:05:30.740 14:43:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.740 14:43:45 env -- scripts/common.sh@366 -- # decimal 2 00:05:30.740 14:43:45 env -- scripts/common.sh@353 -- # local d=2 00:05:30.740 14:43:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.740 14:43:45 env -- scripts/common.sh@355 -- # echo 2 00:05:30.740 14:43:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.740 14:43:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.740 14:43:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.740 14:43:45 env -- scripts/common.sh@368 -- # return 0 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.740 --rc genhtml_branch_coverage=1 00:05:30.740 --rc genhtml_function_coverage=1 00:05:30.740 --rc genhtml_legend=1 00:05:30.740 --rc geninfo_all_blocks=1 00:05:30.740 --rc geninfo_unexecuted_blocks=1 00:05:30.740 00:05:30.740 ' 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.740 --rc genhtml_branch_coverage=1 00:05:30.740 --rc genhtml_function_coverage=1 00:05:30.740 --rc genhtml_legend=1 00:05:30.740 --rc geninfo_all_blocks=1 00:05:30.740 --rc geninfo_unexecuted_blocks=1 00:05:30.740 00:05:30.740 ' 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.740 --rc genhtml_branch_coverage=1 00:05:30.740 --rc genhtml_function_coverage=1 00:05:30.740 --rc genhtml_legend=1 00:05:30.740 --rc geninfo_all_blocks=1 00:05:30.740 --rc geninfo_unexecuted_blocks=1 00:05:30.740 00:05:30.740 ' 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.740 --rc genhtml_branch_coverage=1 00:05:30.740 --rc genhtml_function_coverage=1 00:05:30.740 --rc genhtml_legend=1 00:05:30.740 --rc geninfo_all_blocks=1 00:05:30.740 --rc geninfo_unexecuted_blocks=1 00:05:30.740 00:05:30.740 ' 00:05:30.740 14:43:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.740 14:43:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.740 14:43:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.740 ************************************ 00:05:30.740 START TEST env_memory 00:05:30.740 ************************************ 00:05:30.740 14:43:45 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.740 00:05:30.740 00:05:30.740 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.740 http://cunit.sourceforge.net/ 00:05:30.740 00:05:30.740 00:05:30.740 Suite: memory 00:05:30.740 Test: alloc and free memory map ...[2024-11-22 14:43:45.225160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.740 passed 00:05:30.740 Test: mem map translation ...[2024-11-22 14:43:45.257266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.740 [2024-11-22 14:43:45.257630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.740 [2024-11-22 14:43:45.257778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.740 [2024-11-22 14:43:45.258018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.740 passed 00:05:30.740 Test: mem map registration ...[2024-11-22 14:43:45.323606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:30.740 [2024-11-22 14:43:45.324104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:30.740 passed 00:05:31.000 Test: mem map adjacent registrations ...passed 00:05:31.000 00:05:31.000 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.000 suites 1 1 n/a 0 0 00:05:31.000 tests 4 4 4 0 0 00:05:31.000 asserts 152 152 152 0 n/a 00:05:31.000 00:05:31.000 Elapsed time = 0.217 seconds 00:05:31.000 00:05:31.000 real 0m0.236s 00:05:31.000 user 0m0.218s 00:05:31.000 ************************************ 00:05:31.000 END TEST env_memory 00:05:31.000 ************************************ 00:05:31.000 sys 0m0.011s 00:05:31.000 14:43:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.000 14:43:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:31.000 14:43:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.000 14:43:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.000 14:43:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.000 14:43:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.000 ************************************ 00:05:31.000 START TEST env_vtophys 00:05:31.000 ************************************ 00:05:31.000 14:43:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.000 EAL: lib.eal log level changed from notice to debug 00:05:31.000 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 1 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 2 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 3 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 4 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 5 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 6 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 7 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 8 as core 0 on socket 0 00:05:31.000 EAL: Detected lcore 9 as core 0 on socket 0 00:05:31.000 EAL: Maximum logical cores by configuration: 128 00:05:31.000 EAL: Detected CPU lcores: 10 00:05:31.000 EAL: Detected NUMA nodes: 1 00:05:31.000 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:31.000 EAL: Detected shared linkage of DPDK 00:05:31.000 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:31.000 EAL: Selected IOVA mode 'PA' 00:05:31.000 EAL: Probing VFIO support... 00:05:31.000 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.000 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:31.000 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.000 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.000 EAL: Setting up physically contiguous memory... 00:05:31.000 EAL: Setting maximum number of open files to 524288 00:05:31.000 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.000 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.000 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.000 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.000 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.000 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.000 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.000 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.000 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.000 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.000 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.000 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.000 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.000 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.000 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.000 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.000 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.000 EAL: Hugepages will be freed exactly as allocated. 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: TSC frequency is ~2200000 KHz 00:05:31.000 EAL: Main lcore 0 is ready (tid=7fac9d825a00;cpuset=[0]) 00:05:31.000 EAL: Trying to obtain current memory policy. 00:05:31.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.000 EAL: Restoring previous memory policy: 0 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.000 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.000 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:31.000 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.000 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:31.000 00:05:31.000 00:05:31.000 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.000 http://cunit.sourceforge.net/ 00:05:31.000 00:05:31.000 00:05:31.000 Suite: components_suite 00:05:31.000 Test: vtophys_malloc_test ...passed 00:05:31.000 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:31.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.000 EAL: Restoring previous memory policy: 4 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was expanded by 4MB 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was shrunk by 4MB 00:05:31.000 EAL: Trying to obtain current memory policy. 00:05:31.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.000 EAL: Restoring previous memory policy: 4 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was expanded by 6MB 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was shrunk by 6MB 00:05:31.000 EAL: Trying to obtain current memory policy. 00:05:31.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.000 EAL: Restoring previous memory policy: 4 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was expanded by 10MB 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was shrunk by 10MB 00:05:31.000 EAL: Trying to obtain current memory policy. 00:05:31.000 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.000 EAL: Restoring previous memory policy: 4 00:05:31.000 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.000 EAL: request: mp_malloc_sync 00:05:31.000 EAL: No shared files mode enabled, IPC is disabled 00:05:31.000 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.259 EAL: Trying to obtain current memory policy. 00:05:31.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.259 EAL: Restoring previous memory policy: 4 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.259 EAL: Trying to obtain current memory policy. 
00:05:31.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.259 EAL: Restoring previous memory policy: 4 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.259 EAL: Trying to obtain current memory policy. 00:05:31.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.259 EAL: Restoring previous memory policy: 4 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.259 EAL: Trying to obtain current memory policy. 00:05:31.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.259 EAL: Restoring previous memory policy: 4 00:05:31.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.259 EAL: request: mp_malloc_sync 00:05:31.259 EAL: No shared files mode enabled, IPC is disabled 00:05:31.259 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.518 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.518 EAL: request: mp_malloc_sync 00:05:31.518 EAL: No shared files mode enabled, IPC is disabled 00:05:31.518 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.518 EAL: Trying to obtain current memory policy. 00:05:31.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.518 EAL: Restoring previous memory policy: 4 00:05:31.518 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.518 EAL: request: mp_malloc_sync 00:05:31.518 EAL: No shared files mode enabled, IPC is disabled 00:05:31.518 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.776 EAL: request: mp_malloc_sync 00:05:31.776 EAL: No shared files mode enabled, IPC is disabled 00:05:31.776 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.776 EAL: Trying to obtain current memory policy. 
00:05:31.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.035 EAL: Restoring previous memory policy: 4 00:05:32.035 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.035 EAL: request: mp_malloc_sync 00:05:32.035 EAL: No shared files mode enabled, IPC is disabled 00:05:32.035 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.294 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.553 passed 00:05:32.553 00:05:32.553 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.553 suites 1 1 n/a 0 0 00:05:32.553 tests 2 2 2 0 0 00:05:32.553 asserts 5540 5540 5540 0 n/a 00:05:32.553 00:05:32.553 Elapsed time = 1.329 seconds 00:05:32.553 EAL: request: mp_malloc_sync 00:05:32.553 EAL: No shared files mode enabled, IPC is disabled 00:05:32.553 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.553 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.553 EAL: request: mp_malloc_sync 00:05:32.553 EAL: No shared files mode enabled, IPC is disabled 00:05:32.553 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.553 EAL: No shared files mode enabled, IPC is disabled 00:05:32.553 EAL: No shared files mode enabled, IPC is disabled 00:05:32.553 EAL: No shared files mode enabled, IPC is disabled 00:05:32.553 00:05:32.553 real 0m1.544s 00:05:32.553 user 0m0.843s 00:05:32.553 sys 0m0.560s 00:05:32.553 14:43:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.553 14:43:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.553 ************************************ 00:05:32.553 END TEST env_vtophys 00:05:32.553 ************************************ 00:05:32.553 14:43:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.553 14:43:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.553 14:43:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.553 14:43:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.553 ************************************ 00:05:32.553 START TEST env_pci 00:05:32.553 ************************************ 00:05:32.553 14:43:47 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.553 00:05:32.553 00:05:32.553 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.553 http://cunit.sourceforge.net/ 00:05:32.553 00:05:32.553 00:05:32.553 Suite: pci 00:05:32.553 Test: pci_hook ...[2024-11-22 14:43:47.087983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56753 has claimed it 00:05:32.553 passed 00:05:32.553 00:05:32.553 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.553 suites 1 1 n/a 0 0 00:05:32.553 tests 1 1 1 0 0 00:05:32.553 asserts 25 25 25 0 n/a 00:05:32.553 00:05:32.553 Elapsed time = 0.002 seconds 00:05:32.553 EAL: Cannot find device (10000:00:01.0) 00:05:32.553 EAL: Failed to attach device on primary process 00:05:32.553 00:05:32.553 real 0m0.021s 00:05:32.553 user 0m0.009s 00:05:32.553 sys 0m0.012s 00:05:32.553 14:43:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.553 14:43:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:32.553 ************************************ 00:05:32.553 END TEST env_pci 00:05:32.553 ************************************ 00:05:32.553 14:43:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:32.553 14:43:47 env -- env/env.sh@15 -- # uname 00:05:32.553 14:43:47 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:32.553 14:43:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:32.553 14:43:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.553 14:43:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:32.553 14:43:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.553 14:43:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.553 ************************************ 00:05:32.553 START TEST env_dpdk_post_init 00:05:32.553 ************************************ 00:05:32.553 14:43:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.553 EAL: Detected CPU lcores: 10 00:05:32.553 EAL: Detected NUMA nodes: 1 00:05:32.553 EAL: Detected shared linkage of DPDK 00:05:32.553 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.553 EAL: Selected IOVA mode 'PA' 00:05:32.812 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.812 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:32.812 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:32.812 Starting DPDK initialization... 00:05:32.812 Starting SPDK post initialization... 00:05:32.812 SPDK NVMe probe 00:05:32.812 Attaching to 0000:00:10.0 00:05:32.812 Attaching to 0000:00:11.0 00:05:32.812 Attached to 0000:00:10.0 00:05:32.812 Attached to 0000:00:11.0 00:05:32.812 Cleaning up... 00:05:32.812 00:05:32.812 real 0m0.192s 00:05:32.812 user 0m0.057s 00:05:32.812 sys 0m0.035s 00:05:32.812 14:43:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.812 ************************************ 00:05:32.812 END TEST env_dpdk_post_init 00:05:32.812 ************************************ 00:05:32.812 14:43:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.812 14:43:47 env -- env/env.sh@26 -- # uname 00:05:32.812 14:43:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.812 14:43:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.812 14:43:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.812 14:43:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.812 14:43:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.812 ************************************ 00:05:32.812 START TEST env_mem_callbacks 00:05:32.812 ************************************ 00:05:32.812 14:43:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.812 EAL: Detected CPU lcores: 10 00:05:32.812 EAL: Detected NUMA nodes: 1 00:05:32.812 EAL: Detected shared linkage of DPDK 00:05:32.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.812 EAL: Selected IOVA mode 'PA' 00:05:33.070 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.070 00:05:33.070 00:05:33.070 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.070 http://cunit.sourceforge.net/ 00:05:33.070 00:05:33.070 00:05:33.070 Suite: memory 00:05:33.070 Test: test ... 
00:05:33.070 register 0x200000200000 2097152 00:05:33.070 malloc 3145728 00:05:33.070 register 0x200000400000 4194304 00:05:33.070 buf 0x200000500000 len 3145728 PASSED 00:05:33.070 malloc 64 00:05:33.070 buf 0x2000004fff40 len 64 PASSED 00:05:33.070 malloc 4194304 00:05:33.070 register 0x200000800000 6291456 00:05:33.070 buf 0x200000a00000 len 4194304 PASSED 00:05:33.070 free 0x200000500000 3145728 00:05:33.070 free 0x2000004fff40 64 00:05:33.070 unregister 0x200000400000 4194304 PASSED 00:05:33.070 free 0x200000a00000 4194304 00:05:33.070 unregister 0x200000800000 6291456 PASSED 00:05:33.070 malloc 8388608 00:05:33.070 register 0x200000400000 10485760 00:05:33.070 buf 0x200000600000 len 8388608 PASSED 00:05:33.070 free 0x200000600000 8388608 00:05:33.070 unregister 0x200000400000 10485760 PASSED 00:05:33.070 passed 00:05:33.070 00:05:33.070 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.070 suites 1 1 n/a 0 0 00:05:33.070 tests 1 1 1 0 0 00:05:33.070 asserts 15 15 15 0 n/a 00:05:33.070 00:05:33.070 Elapsed time = 0.009 seconds 00:05:33.070 00:05:33.070 real 0m0.152s 00:05:33.070 user 0m0.018s 00:05:33.070 sys 0m0.032s 00:05:33.070 14:43:47 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.070 ************************************ 00:05:33.070 END TEST env_mem_callbacks 00:05:33.070 14:43:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 ************************************ 00:05:33.070 00:05:33.070 real 0m2.627s 00:05:33.070 user 0m1.347s 00:05:33.070 sys 0m0.911s 00:05:33.070 14:43:47 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.070 14:43:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 ************************************ 00:05:33.070 END TEST env 00:05:33.070 ************************************ 00:05:33.070 14:43:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.070 14:43:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.070 14:43:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.070 14:43:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 ************************************ 00:05:33.071 START TEST rpc 00:05:33.071 ************************************ 00:05:33.071 14:43:47 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.382 * Looking for test storage... 
00:05:33.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.382 14:43:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.382 14:43:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.382 14:43:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.382 14:43:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.382 14:43:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.382 14:43:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.382 14:43:47 rpc -- scripts/common.sh@345 -- # : 1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.382 14:43:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.382 14:43:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.382 14:43:47 rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.382 14:43:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.382 14:43:47 rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.382 14:43:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.382 14:43:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.382 14:43:47 rpc -- scripts/common.sh@368 -- # return 0 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.382 --rc genhtml_branch_coverage=1 00:05:33.382 --rc genhtml_function_coverage=1 00:05:33.382 --rc genhtml_legend=1 00:05:33.382 --rc geninfo_all_blocks=1 00:05:33.382 --rc geninfo_unexecuted_blocks=1 00:05:33.382 00:05:33.382 ' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.382 --rc genhtml_branch_coverage=1 00:05:33.382 --rc genhtml_function_coverage=1 00:05:33.382 --rc genhtml_legend=1 00:05:33.382 --rc geninfo_all_blocks=1 00:05:33.382 --rc geninfo_unexecuted_blocks=1 00:05:33.382 00:05:33.382 ' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.382 --rc genhtml_branch_coverage=1 00:05:33.382 --rc genhtml_function_coverage=1 00:05:33.382 --rc 
genhtml_legend=1 00:05:33.382 --rc geninfo_all_blocks=1 00:05:33.382 --rc geninfo_unexecuted_blocks=1 00:05:33.382 00:05:33.382 ' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.382 --rc genhtml_branch_coverage=1 00:05:33.382 --rc genhtml_function_coverage=1 00:05:33.382 --rc genhtml_legend=1 00:05:33.382 --rc geninfo_all_blocks=1 00:05:33.382 --rc geninfo_unexecuted_blocks=1 00:05:33.382 00:05:33.382 ' 00:05:33.382 14:43:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56870 00:05:33.382 14:43:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.382 14:43:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:33.382 14:43:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56870 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 56870 ']' 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.382 14:43:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.382 [2024-11-22 14:43:47.943386] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:05:33.382 [2024-11-22 14:43:47.943524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56870 ] 00:05:33.656 [2024-11-22 14:43:48.092927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.656 [2024-11-22 14:43:48.157269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:33.656 [2024-11-22 14:43:48.157343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56870' to capture a snapshot of events at runtime. 00:05:33.656 [2024-11-22 14:43:48.157369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.656 [2024-11-22 14:43:48.157377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.656 [2024-11-22 14:43:48.157394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56870 for offline analysis/debug. 
00:05:33.656 [2024-11-22 14:43:48.157870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.656 [2024-11-22 14:43:48.229858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.916 14:43:48 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.916 14:43:48 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.916 14:43:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.916 14:43:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.916 14:43:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:33.916 14:43:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:33.916 14:43:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.916 14:43:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.916 14:43:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.916 ************************************ 00:05:33.916 START TEST rpc_integrity 00:05:33.916 ************************************ 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.916 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.916 { 00:05:33.916 "name": "Malloc0", 00:05:33.916 "aliases": [ 00:05:33.916 "ac8b3b2c-869f-47ee-b92c-7a90ec5d4d54" 00:05:33.916 ], 00:05:33.916 "product_name": "Malloc disk", 00:05:33.916 "block_size": 512, 00:05:33.916 "num_blocks": 16384, 00:05:33.916 "uuid": "ac8b3b2c-869f-47ee-b92c-7a90ec5d4d54", 00:05:33.916 "assigned_rate_limits": { 00:05:33.916 "rw_ios_per_sec": 0, 00:05:33.916 "rw_mbytes_per_sec": 0, 00:05:33.916 "r_mbytes_per_sec": 0, 00:05:33.916 "w_mbytes_per_sec": 0 00:05:33.916 }, 00:05:33.916 "claimed": false, 00:05:33.916 "zoned": false, 00:05:33.916 
"supported_io_types": { 00:05:33.916 "read": true, 00:05:33.916 "write": true, 00:05:33.916 "unmap": true, 00:05:33.916 "flush": true, 00:05:33.916 "reset": true, 00:05:33.916 "nvme_admin": false, 00:05:33.916 "nvme_io": false, 00:05:33.916 "nvme_io_md": false, 00:05:33.916 "write_zeroes": true, 00:05:33.916 "zcopy": true, 00:05:33.916 "get_zone_info": false, 00:05:33.916 "zone_management": false, 00:05:33.916 "zone_append": false, 00:05:33.916 "compare": false, 00:05:33.916 "compare_and_write": false, 00:05:33.916 "abort": true, 00:05:33.916 "seek_hole": false, 00:05:33.916 "seek_data": false, 00:05:33.916 "copy": true, 00:05:33.916 "nvme_iov_md": false 00:05:33.916 }, 00:05:33.916 "memory_domains": [ 00:05:33.916 { 00:05:33.916 "dma_device_id": "system", 00:05:33.916 "dma_device_type": 1 00:05:33.916 }, 00:05:33.916 { 00:05:33.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.916 "dma_device_type": 2 00:05:33.916 } 00:05:33.916 ], 00:05:33.916 "driver_specific": {} 00:05:33.916 } 00:05:33.916 ]' 00:05:33.916 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.174 [2024-11-22 14:43:48.621600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.174 [2024-11-22 14:43:48.621668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.174 [2024-11-22 14:43:48.621689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ffff20 00:05:34.174 [2024-11-22 14:43:48.621699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.174 [2024-11-22 14:43:48.623501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.174 [2024-11-22 14:43:48.623536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.174 Passthru0 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.174 { 00:05:34.174 "name": "Malloc0", 00:05:34.174 "aliases": [ 00:05:34.174 "ac8b3b2c-869f-47ee-b92c-7a90ec5d4d54" 00:05:34.174 ], 00:05:34.174 "product_name": "Malloc disk", 00:05:34.174 "block_size": 512, 00:05:34.174 "num_blocks": 16384, 00:05:34.174 "uuid": "ac8b3b2c-869f-47ee-b92c-7a90ec5d4d54", 00:05:34.174 "assigned_rate_limits": { 00:05:34.174 "rw_ios_per_sec": 0, 00:05:34.174 "rw_mbytes_per_sec": 0, 00:05:34.174 "r_mbytes_per_sec": 0, 00:05:34.174 "w_mbytes_per_sec": 0 00:05:34.174 }, 00:05:34.174 "claimed": true, 00:05:34.174 "claim_type": "exclusive_write", 00:05:34.174 "zoned": false, 00:05:34.174 "supported_io_types": { 00:05:34.174 "read": true, 00:05:34.174 "write": true, 00:05:34.174 "unmap": true, 00:05:34.174 "flush": true, 00:05:34.174 "reset": true, 00:05:34.174 "nvme_admin": false, 
00:05:34.174 "nvme_io": false, 00:05:34.174 "nvme_io_md": false, 00:05:34.174 "write_zeroes": true, 00:05:34.174 "zcopy": true, 00:05:34.174 "get_zone_info": false, 00:05:34.174 "zone_management": false, 00:05:34.174 "zone_append": false, 00:05:34.174 "compare": false, 00:05:34.174 "compare_and_write": false, 00:05:34.174 "abort": true, 00:05:34.174 "seek_hole": false, 00:05:34.174 "seek_data": false, 00:05:34.174 "copy": true, 00:05:34.174 "nvme_iov_md": false 00:05:34.174 }, 00:05:34.174 "memory_domains": [ 00:05:34.174 { 00:05:34.174 "dma_device_id": "system", 00:05:34.174 "dma_device_type": 1 00:05:34.174 }, 00:05:34.174 { 00:05:34.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.174 "dma_device_type": 2 00:05:34.174 } 00:05:34.174 ], 00:05:34.174 "driver_specific": {} 00:05:34.174 }, 00:05:34.174 { 00:05:34.174 "name": "Passthru0", 00:05:34.174 "aliases": [ 00:05:34.174 "682ad564-edb2-5e7f-9d87-23549250c2c7" 00:05:34.174 ], 00:05:34.174 "product_name": "passthru", 00:05:34.174 "block_size": 512, 00:05:34.174 "num_blocks": 16384, 00:05:34.174 "uuid": "682ad564-edb2-5e7f-9d87-23549250c2c7", 00:05:34.174 "assigned_rate_limits": { 00:05:34.174 "rw_ios_per_sec": 0, 00:05:34.174 "rw_mbytes_per_sec": 0, 00:05:34.174 "r_mbytes_per_sec": 0, 00:05:34.174 "w_mbytes_per_sec": 0 00:05:34.174 }, 00:05:34.174 "claimed": false, 00:05:34.174 "zoned": false, 00:05:34.174 "supported_io_types": { 00:05:34.174 "read": true, 00:05:34.174 "write": true, 00:05:34.174 "unmap": true, 00:05:34.174 "flush": true, 00:05:34.174 "reset": true, 00:05:34.174 "nvme_admin": false, 00:05:34.174 "nvme_io": false, 00:05:34.174 "nvme_io_md": false, 00:05:34.174 "write_zeroes": true, 00:05:34.174 "zcopy": true, 00:05:34.174 "get_zone_info": false, 00:05:34.174 "zone_management": false, 00:05:34.174 "zone_append": false, 00:05:34.174 "compare": false, 00:05:34.174 "compare_and_write": false, 00:05:34.174 "abort": true, 00:05:34.174 "seek_hole": false, 00:05:34.174 "seek_data": false, 00:05:34.174 "copy": true, 00:05:34.174 "nvme_iov_md": false 00:05:34.174 }, 00:05:34.174 "memory_domains": [ 00:05:34.174 { 00:05:34.174 "dma_device_id": "system", 00:05:34.174 "dma_device_type": 1 00:05:34.174 }, 00:05:34.174 { 00:05:34.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.174 "dma_device_type": 2 00:05:34.174 } 00:05:34.174 ], 00:05:34.174 "driver_specific": { 00:05:34.174 "passthru": { 00:05:34.174 "name": "Passthru0", 00:05:34.174 "base_bdev_name": "Malloc0" 00:05:34.174 } 00:05:34.174 } 00:05:34.174 } 00:05:34.174 ]' 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.174 14:43:48 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.174 ************************************ 00:05:34.174 END TEST rpc_integrity 00:05:34.174 ************************************ 00:05:34.174 14:43:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.174 00:05:34.174 real 0m0.347s 00:05:34.174 user 0m0.234s 00:05:34.174 sys 0m0.039s 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.174 14:43:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.432 14:43:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.432 14:43:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.432 14:43:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.432 14:43:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.432 ************************************ 00:05:34.432 START TEST rpc_plugins 00:05:34.432 ************************************ 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.432 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.432 { 00:05:34.432 "name": "Malloc1", 00:05:34.432 "aliases": [ 00:05:34.432 "676b9bcb-304c-46a8-8a61-a7fdc7d2028e" 00:05:34.432 ], 00:05:34.432 "product_name": "Malloc disk", 00:05:34.432 "block_size": 4096, 00:05:34.432 "num_blocks": 256, 00:05:34.432 "uuid": "676b9bcb-304c-46a8-8a61-a7fdc7d2028e", 00:05:34.432 "assigned_rate_limits": { 00:05:34.432 "rw_ios_per_sec": 0, 00:05:34.432 "rw_mbytes_per_sec": 0, 00:05:34.432 "r_mbytes_per_sec": 0, 00:05:34.432 "w_mbytes_per_sec": 0 00:05:34.432 }, 00:05:34.432 "claimed": false, 00:05:34.432 "zoned": false, 00:05:34.432 "supported_io_types": { 00:05:34.432 "read": true, 00:05:34.432 "write": true, 00:05:34.432 "unmap": true, 00:05:34.432 "flush": true, 00:05:34.432 "reset": true, 00:05:34.432 "nvme_admin": false, 00:05:34.432 "nvme_io": false, 00:05:34.432 "nvme_io_md": false, 00:05:34.432 "write_zeroes": true, 00:05:34.432 "zcopy": true, 00:05:34.432 "get_zone_info": false, 00:05:34.432 "zone_management": false, 00:05:34.432 "zone_append": false, 00:05:34.432 "compare": false, 00:05:34.432 "compare_and_write": false, 00:05:34.432 "abort": true, 00:05:34.432 "seek_hole": false, 00:05:34.432 "seek_data": false, 00:05:34.432 "copy": true, 00:05:34.432 "nvme_iov_md": false 00:05:34.432 }, 00:05:34.432 "memory_domains": [ 00:05:34.432 { 
00:05:34.432 "dma_device_id": "system", 00:05:34.432 "dma_device_type": 1 00:05:34.432 }, 00:05:34.432 { 00:05:34.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.432 "dma_device_type": 2 00:05:34.432 } 00:05:34.432 ], 00:05:34.432 "driver_specific": {} 00:05:34.432 } 00:05:34.432 ]' 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.432 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.433 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 14:43:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.433 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.433 14:43:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.433 ************************************ 00:05:34.433 END TEST rpc_plugins 00:05:34.433 ************************************ 00:05:34.433 14:43:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.433 00:05:34.433 real 0m0.177s 00:05:34.433 user 0m0.110s 00:05:34.433 sys 0m0.026s 00:05:34.433 14:43:49 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.433 14:43:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 14:43:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:34.433 14:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.433 14:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.433 14:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.433 ************************************ 00:05:34.433 START TEST rpc_trace_cmd_test 00:05:34.433 ************************************ 00:05:34.433 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:34.433 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:34.433 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:34.433 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.433 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:34.690 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56870", 00:05:34.690 "tpoint_group_mask": "0x8", 00:05:34.690 "iscsi_conn": { 00:05:34.690 "mask": "0x2", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "scsi": { 00:05:34.690 "mask": "0x4", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "bdev": { 00:05:34.690 "mask": "0x8", 00:05:34.690 "tpoint_mask": "0xffffffffffffffff" 00:05:34.690 }, 00:05:34.690 "nvmf_rdma": { 00:05:34.690 "mask": "0x10", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "nvmf_tcp": { 00:05:34.690 "mask": "0x20", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "ftl": { 00:05:34.690 
"mask": "0x40", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "blobfs": { 00:05:34.690 "mask": "0x80", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "dsa": { 00:05:34.690 "mask": "0x200", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "thread": { 00:05:34.690 "mask": "0x400", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "nvme_pcie": { 00:05:34.690 "mask": "0x800", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "iaa": { 00:05:34.690 "mask": "0x1000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "nvme_tcp": { 00:05:34.690 "mask": "0x2000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "bdev_nvme": { 00:05:34.690 "mask": "0x4000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "sock": { 00:05:34.690 "mask": "0x8000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "blob": { 00:05:34.690 "mask": "0x10000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "bdev_raid": { 00:05:34.690 "mask": "0x20000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 }, 00:05:34.690 "scheduler": { 00:05:34.690 "mask": "0x40000", 00:05:34.690 "tpoint_mask": "0x0" 00:05:34.690 } 00:05:34.690 }' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:34.690 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:34.947 ************************************ 00:05:34.947 END TEST rpc_trace_cmd_test 00:05:34.947 ************************************ 00:05:34.947 14:43:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:34.947 00:05:34.947 real 0m0.288s 00:05:34.947 user 0m0.245s 00:05:34.947 sys 0m0.031s 00:05:34.947 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.947 14:43:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 14:43:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:34.947 14:43:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:34.947 14:43:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:34.947 14:43:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.947 14:43:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.947 14:43:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 ************************************ 00:05:34.947 START TEST rpc_daemon_integrity 00:05:34.947 ************************************ 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 
14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.947 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.947 { 00:05:34.947 "name": "Malloc2", 00:05:34.947 "aliases": [ 00:05:34.947 "9c03aaf7-ec31-423a-9207-1c9dbf31489c" 00:05:34.947 ], 00:05:34.947 "product_name": "Malloc disk", 00:05:34.947 "block_size": 512, 00:05:34.947 "num_blocks": 16384, 00:05:34.947 "uuid": "9c03aaf7-ec31-423a-9207-1c9dbf31489c", 00:05:34.947 "assigned_rate_limits": { 00:05:34.947 "rw_ios_per_sec": 0, 00:05:34.947 "rw_mbytes_per_sec": 0, 00:05:34.947 "r_mbytes_per_sec": 0, 00:05:34.947 "w_mbytes_per_sec": 0 00:05:34.947 }, 00:05:34.947 "claimed": false, 00:05:34.947 "zoned": false, 00:05:34.947 "supported_io_types": { 00:05:34.947 "read": true, 00:05:34.947 "write": true, 00:05:34.947 "unmap": true, 00:05:34.948 "flush": true, 00:05:34.948 "reset": true, 00:05:34.948 "nvme_admin": false, 00:05:34.948 "nvme_io": false, 00:05:34.948 "nvme_io_md": false, 00:05:34.948 "write_zeroes": true, 00:05:34.948 "zcopy": true, 00:05:34.948 "get_zone_info": false, 00:05:34.948 "zone_management": false, 00:05:34.948 "zone_append": false, 00:05:34.948 "compare": false, 00:05:34.948 "compare_and_write": false, 00:05:34.948 "abort": true, 00:05:34.948 "seek_hole": false, 00:05:34.948 "seek_data": false, 00:05:34.948 "copy": true, 00:05:34.948 "nvme_iov_md": false 00:05:34.948 }, 00:05:34.948 "memory_domains": [ 00:05:34.948 { 00:05:34.948 "dma_device_id": "system", 00:05:34.948 "dma_device_type": 1 00:05:34.948 }, 00:05:34.948 { 00:05:34.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.948 "dma_device_type": 2 00:05:34.948 } 00:05:34.948 ], 00:05:34.948 "driver_specific": {} 00:05:34.948 } 00:05:34.948 ]' 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.948 [2024-11-22 14:43:49.594770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:34.948 [2024-11-22 14:43:49.594832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:34.948 [2024-11-22 14:43:49.594850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2193870 00:05:34.948 [2024-11-22 14:43:49.594858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.948 [2024-11-22 14:43:49.596149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.948 [2024-11-22 14:43:49.596186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.948 Passthru0 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.948 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.206 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.206 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.206 { 00:05:35.206 "name": "Malloc2", 00:05:35.206 "aliases": [ 00:05:35.206 "9c03aaf7-ec31-423a-9207-1c9dbf31489c" 00:05:35.206 ], 00:05:35.206 "product_name": "Malloc disk", 00:05:35.206 "block_size": 512, 00:05:35.206 "num_blocks": 16384, 00:05:35.206 "uuid": "9c03aaf7-ec31-423a-9207-1c9dbf31489c", 00:05:35.206 "assigned_rate_limits": { 00:05:35.206 "rw_ios_per_sec": 0, 00:05:35.206 "rw_mbytes_per_sec": 0, 00:05:35.206 "r_mbytes_per_sec": 0, 00:05:35.206 "w_mbytes_per_sec": 0 00:05:35.206 }, 00:05:35.206 "claimed": true, 00:05:35.206 "claim_type": "exclusive_write", 00:05:35.206 "zoned": false, 00:05:35.206 "supported_io_types": { 00:05:35.206 "read": true, 00:05:35.206 "write": true, 00:05:35.206 "unmap": true, 00:05:35.206 "flush": true, 00:05:35.206 "reset": true, 00:05:35.206 "nvme_admin": false, 00:05:35.206 "nvme_io": false, 00:05:35.206 "nvme_io_md": false, 00:05:35.206 "write_zeroes": true, 00:05:35.206 "zcopy": true, 00:05:35.206 "get_zone_info": false, 00:05:35.206 "zone_management": false, 00:05:35.206 "zone_append": false, 00:05:35.206 "compare": false, 00:05:35.206 "compare_and_write": false, 00:05:35.206 "abort": true, 00:05:35.206 "seek_hole": false, 00:05:35.206 "seek_data": false, 00:05:35.206 "copy": true, 00:05:35.206 "nvme_iov_md": false 00:05:35.206 }, 00:05:35.206 "memory_domains": [ 00:05:35.206 { 00:05:35.206 "dma_device_id": "system", 00:05:35.206 "dma_device_type": 1 00:05:35.206 }, 00:05:35.206 { 00:05:35.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.206 "dma_device_type": 2 00:05:35.206 } 00:05:35.206 ], 00:05:35.206 "driver_specific": {} 00:05:35.206 }, 00:05:35.206 { 00:05:35.207 "name": "Passthru0", 00:05:35.207 "aliases": [ 00:05:35.207 "0d4386ba-4b8d-5178-9b3a-32a37fc6fdde" 00:05:35.207 ], 00:05:35.207 "product_name": "passthru", 00:05:35.207 "block_size": 512, 00:05:35.207 "num_blocks": 16384, 00:05:35.207 "uuid": "0d4386ba-4b8d-5178-9b3a-32a37fc6fdde", 00:05:35.207 "assigned_rate_limits": { 00:05:35.207 "rw_ios_per_sec": 0, 00:05:35.207 "rw_mbytes_per_sec": 0, 00:05:35.207 "r_mbytes_per_sec": 0, 00:05:35.207 "w_mbytes_per_sec": 0 00:05:35.207 }, 00:05:35.207 "claimed": false, 00:05:35.207 "zoned": false, 00:05:35.207 "supported_io_types": { 00:05:35.207 "read": true, 00:05:35.207 "write": true, 00:05:35.207 "unmap": true, 00:05:35.207 "flush": true, 00:05:35.207 "reset": true, 00:05:35.207 "nvme_admin": false, 00:05:35.207 "nvme_io": false, 00:05:35.207 
"nvme_io_md": false, 00:05:35.207 "write_zeroes": true, 00:05:35.207 "zcopy": true, 00:05:35.207 "get_zone_info": false, 00:05:35.207 "zone_management": false, 00:05:35.207 "zone_append": false, 00:05:35.207 "compare": false, 00:05:35.207 "compare_and_write": false, 00:05:35.207 "abort": true, 00:05:35.207 "seek_hole": false, 00:05:35.207 "seek_data": false, 00:05:35.207 "copy": true, 00:05:35.207 "nvme_iov_md": false 00:05:35.207 }, 00:05:35.207 "memory_domains": [ 00:05:35.207 { 00:05:35.207 "dma_device_id": "system", 00:05:35.207 "dma_device_type": 1 00:05:35.207 }, 00:05:35.207 { 00:05:35.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.207 "dma_device_type": 2 00:05:35.207 } 00:05:35.207 ], 00:05:35.207 "driver_specific": { 00:05:35.207 "passthru": { 00:05:35.207 "name": "Passthru0", 00:05:35.207 "base_bdev_name": "Malloc2" 00:05:35.207 } 00:05:35.207 } 00:05:35.207 } 00:05:35.207 ]' 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.207 ************************************ 00:05:35.207 END TEST rpc_daemon_integrity 00:05:35.207 ************************************ 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.207 00:05:35.207 real 0m0.336s 00:05:35.207 user 0m0.218s 00:05:35.207 sys 0m0.046s 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.207 14:43:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 14:43:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.207 14:43:49 rpc -- rpc/rpc.sh@84 -- # killprocess 56870 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@954 -- # '[' -z 56870 ']' 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@958 -- # kill -0 56870 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56870 00:05:35.207 killing process with pid 56870 00:05:35.207 14:43:49 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56870' 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@973 -- # kill 56870 00:05:35.207 14:43:49 rpc -- common/autotest_common.sh@978 -- # wait 56870 00:05:35.773 00:05:35.773 real 0m2.600s 00:05:35.773 user 0m3.312s 00:05:35.773 sys 0m0.702s 00:05:35.773 ************************************ 00:05:35.773 END TEST rpc 00:05:35.773 ************************************ 00:05:35.773 14:43:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.773 14:43:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.773 14:43:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:35.773 14:43:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.773 14:43:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.773 14:43:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.773 ************************************ 00:05:35.773 START TEST skip_rpc 00:05:35.773 ************************************ 00:05:35.773 14:43:50 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:35.773 * Looking for test storage... 00:05:35.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.773 14:43:50 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.773 14:43:50 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.773 14:43:50 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.032 14:43:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.032 --rc genhtml_branch_coverage=1 00:05:36.032 --rc genhtml_function_coverage=1 00:05:36.032 --rc genhtml_legend=1 00:05:36.032 --rc geninfo_all_blocks=1 00:05:36.032 --rc geninfo_unexecuted_blocks=1 00:05:36.032 00:05:36.032 ' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.032 --rc genhtml_branch_coverage=1 00:05:36.032 --rc genhtml_function_coverage=1 00:05:36.032 --rc genhtml_legend=1 00:05:36.032 --rc geninfo_all_blocks=1 00:05:36.032 --rc geninfo_unexecuted_blocks=1 00:05:36.032 00:05:36.032 ' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.032 --rc genhtml_branch_coverage=1 00:05:36.032 --rc genhtml_function_coverage=1 00:05:36.032 --rc genhtml_legend=1 00:05:36.032 --rc geninfo_all_blocks=1 00:05:36.032 --rc geninfo_unexecuted_blocks=1 00:05:36.032 00:05:36.032 ' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.032 --rc genhtml_branch_coverage=1 00:05:36.032 --rc genhtml_function_coverage=1 00:05:36.032 --rc genhtml_legend=1 00:05:36.032 --rc geninfo_all_blocks=1 00:05:36.032 --rc geninfo_unexecuted_blocks=1 00:05:36.032 00:05:36.032 ' 00:05:36.032 14:43:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:36.032 14:43:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:36.032 14:43:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.032 14:43:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.032 ************************************ 00:05:36.032 START TEST skip_rpc 00:05:36.032 ************************************ 00:05:36.032 14:43:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:36.032 14:43:50 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57074 00:05:36.032 14:43:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.033 14:43:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.033 14:43:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.033 [2024-11-22 14:43:50.577602] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:05:36.033 [2024-11-22 14:43:50.577950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57074 ] 00:05:36.291 [2024-11-22 14:43:50.717195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.291 [2024-11-22 14:43:50.782034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.291 [2024-11-22 14:43:50.861591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.561 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57074 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57074 ']' 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57074 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57074 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 57074' 00:05:41.562 killing process with pid 57074 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57074 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57074 00:05:41.562 00:05:41.562 ************************************ 00:05:41.562 END TEST skip_rpc 00:05:41.562 ************************************ 00:05:41.562 real 0m5.470s 00:05:41.562 user 0m5.093s 00:05:41.562 sys 0m0.294s 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.562 14:43:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.562 14:43:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.562 14:43:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.562 14:43:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.562 14:43:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.562 ************************************ 00:05:41.562 START TEST skip_rpc_with_json 00:05:41.562 ************************************ 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57155 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57155 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57155 ']' 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.562 14:43:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.562 [2024-11-22 14:43:56.116957] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:05:41.562 [2024-11-22 14:43:56.117428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57155 ] 00:05:41.821 [2024-11-22 14:43:56.264209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.821 [2024-11-22 14:43:56.328069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.821 [2024-11-22 14:43:56.407324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.757 [2024-11-22 14:43:57.121641] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.757 request: 00:05:42.757 { 00:05:42.757 "trtype": "tcp", 00:05:42.757 "method": "nvmf_get_transports", 00:05:42.757 "req_id": 1 00:05:42.757 } 00:05:42.757 Got JSON-RPC error response 00:05:42.757 response: 00:05:42.757 { 00:05:42.757 "code": -19, 00:05:42.757 "message": "No such device" 00:05:42.757 } 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.757 [2024-11-22 14:43:57.133756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.757 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.757 { 00:05:42.757 "subsystems": [ 00:05:42.757 { 00:05:42.757 "subsystem": "fsdev", 00:05:42.757 "config": [ 00:05:42.757 { 00:05:42.757 "method": "fsdev_set_opts", 00:05:42.757 "params": { 00:05:42.757 "fsdev_io_pool_size": 65535, 00:05:42.757 "fsdev_io_cache_size": 256 00:05:42.757 } 00:05:42.757 } 00:05:42.757 ] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "keyring", 00:05:42.757 "config": [] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "iobuf", 00:05:42.757 "config": [ 00:05:42.757 { 00:05:42.757 "method": "iobuf_set_options", 00:05:42.757 "params": { 00:05:42.757 "small_pool_count": 8192, 00:05:42.757 "large_pool_count": 1024, 00:05:42.757 "small_bufsize": 8192, 00:05:42.757 "large_bufsize": 135168, 00:05:42.757 "enable_numa": false 00:05:42.757 } 
00:05:42.757 } 00:05:42.757 ] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "sock", 00:05:42.757 "config": [ 00:05:42.757 { 00:05:42.757 "method": "sock_set_default_impl", 00:05:42.757 "params": { 00:05:42.757 "impl_name": "uring" 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "sock_impl_set_options", 00:05:42.757 "params": { 00:05:42.757 "impl_name": "ssl", 00:05:42.757 "recv_buf_size": 4096, 00:05:42.757 "send_buf_size": 4096, 00:05:42.757 "enable_recv_pipe": true, 00:05:42.757 "enable_quickack": false, 00:05:42.757 "enable_placement_id": 0, 00:05:42.757 "enable_zerocopy_send_server": true, 00:05:42.757 "enable_zerocopy_send_client": false, 00:05:42.757 "zerocopy_threshold": 0, 00:05:42.757 "tls_version": 0, 00:05:42.757 "enable_ktls": false 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "sock_impl_set_options", 00:05:42.757 "params": { 00:05:42.757 "impl_name": "posix", 00:05:42.757 "recv_buf_size": 2097152, 00:05:42.757 "send_buf_size": 2097152, 00:05:42.757 "enable_recv_pipe": true, 00:05:42.757 "enable_quickack": false, 00:05:42.757 "enable_placement_id": 0, 00:05:42.757 "enable_zerocopy_send_server": true, 00:05:42.757 "enable_zerocopy_send_client": false, 00:05:42.757 "zerocopy_threshold": 0, 00:05:42.757 "tls_version": 0, 00:05:42.757 "enable_ktls": false 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "sock_impl_set_options", 00:05:42.757 "params": { 00:05:42.757 "impl_name": "uring", 00:05:42.757 "recv_buf_size": 2097152, 00:05:42.757 "send_buf_size": 2097152, 00:05:42.757 "enable_recv_pipe": true, 00:05:42.757 "enable_quickack": false, 00:05:42.757 "enable_placement_id": 0, 00:05:42.757 "enable_zerocopy_send_server": false, 00:05:42.757 "enable_zerocopy_send_client": false, 00:05:42.757 "zerocopy_threshold": 0, 00:05:42.757 "tls_version": 0, 00:05:42.757 "enable_ktls": false 00:05:42.757 } 00:05:42.757 } 00:05:42.757 ] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "vmd", 00:05:42.757 "config": [] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "accel", 00:05:42.757 "config": [ 00:05:42.757 { 00:05:42.757 "method": "accel_set_options", 00:05:42.757 "params": { 00:05:42.757 "small_cache_size": 128, 00:05:42.757 "large_cache_size": 16, 00:05:42.757 "task_count": 2048, 00:05:42.757 "sequence_count": 2048, 00:05:42.757 "buf_count": 2048 00:05:42.757 } 00:05:42.757 } 00:05:42.757 ] 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "subsystem": "bdev", 00:05:42.757 "config": [ 00:05:42.757 { 00:05:42.757 "method": "bdev_set_options", 00:05:42.757 "params": { 00:05:42.757 "bdev_io_pool_size": 65535, 00:05:42.757 "bdev_io_cache_size": 256, 00:05:42.757 "bdev_auto_examine": true, 00:05:42.757 "iobuf_small_cache_size": 128, 00:05:42.757 "iobuf_large_cache_size": 16 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "bdev_raid_set_options", 00:05:42.757 "params": { 00:05:42.757 "process_window_size_kb": 1024, 00:05:42.757 "process_max_bandwidth_mb_sec": 0 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "bdev_iscsi_set_options", 00:05:42.757 "params": { 00:05:42.757 "timeout_sec": 30 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "method": "bdev_nvme_set_options", 00:05:42.757 "params": { 00:05:42.757 "action_on_timeout": "none", 00:05:42.757 "timeout_us": 0, 00:05:42.757 "timeout_admin_us": 0, 00:05:42.757 "keep_alive_timeout_ms": 10000, 00:05:42.757 "arbitration_burst": 0, 00:05:42.757 "low_priority_weight": 0, 00:05:42.757 "medium_priority_weight": 
0, 00:05:42.757 "high_priority_weight": 0, 00:05:42.757 "nvme_adminq_poll_period_us": 10000, 00:05:42.757 "nvme_ioq_poll_period_us": 0, 00:05:42.757 "io_queue_requests": 0, 00:05:42.757 "delay_cmd_submit": true, 00:05:42.757 "transport_retry_count": 4, 00:05:42.757 "bdev_retry_count": 3, 00:05:42.757 "transport_ack_timeout": 0, 00:05:42.757 "ctrlr_loss_timeout_sec": 0, 00:05:42.757 "reconnect_delay_sec": 0, 00:05:42.757 "fast_io_fail_timeout_sec": 0, 00:05:42.757 "disable_auto_failback": false, 00:05:42.757 "generate_uuids": false, 00:05:42.757 "transport_tos": 0, 00:05:42.757 "nvme_error_stat": false, 00:05:42.757 "rdma_srq_size": 0, 00:05:42.757 "io_path_stat": false, 00:05:42.757 "allow_accel_sequence": false, 00:05:42.757 "rdma_max_cq_size": 0, 00:05:42.757 "rdma_cm_event_timeout_ms": 0, 00:05:42.757 "dhchap_digests": [ 00:05:42.757 "sha256", 00:05:42.757 "sha384", 00:05:42.757 "sha512" 00:05:42.757 ], 00:05:42.757 "dhchap_dhgroups": [ 00:05:42.757 "null", 00:05:42.757 "ffdhe2048", 00:05:42.757 "ffdhe3072", 00:05:42.757 "ffdhe4096", 00:05:42.757 "ffdhe6144", 00:05:42.757 "ffdhe8192" 00:05:42.757 ] 00:05:42.757 } 00:05:42.757 }, 00:05:42.757 { 00:05:42.758 "method": "bdev_nvme_set_hotplug", 00:05:42.758 "params": { 00:05:42.758 "period_us": 100000, 00:05:42.758 "enable": false 00:05:42.758 } 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "method": "bdev_wait_for_examine" 00:05:42.758 } 00:05:42.758 ] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "scsi", 00:05:42.758 "config": null 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "scheduler", 00:05:42.758 "config": [ 00:05:42.758 { 00:05:42.758 "method": "framework_set_scheduler", 00:05:42.758 "params": { 00:05:42.758 "name": "static" 00:05:42.758 } 00:05:42.758 } 00:05:42.758 ] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "vhost_scsi", 00:05:42.758 "config": [] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "vhost_blk", 00:05:42.758 "config": [] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "ublk", 00:05:42.758 "config": [] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "nbd", 00:05:42.758 "config": [] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "nvmf", 00:05:42.758 "config": [ 00:05:42.758 { 00:05:42.758 "method": "nvmf_set_config", 00:05:42.758 "params": { 00:05:42.758 "discovery_filter": "match_any", 00:05:42.758 "admin_cmd_passthru": { 00:05:42.758 "identify_ctrlr": false 00:05:42.758 }, 00:05:42.758 "dhchap_digests": [ 00:05:42.758 "sha256", 00:05:42.758 "sha384", 00:05:42.758 "sha512" 00:05:42.758 ], 00:05:42.758 "dhchap_dhgroups": [ 00:05:42.758 "null", 00:05:42.758 "ffdhe2048", 00:05:42.758 "ffdhe3072", 00:05:42.758 "ffdhe4096", 00:05:42.758 "ffdhe6144", 00:05:42.758 "ffdhe8192" 00:05:42.758 ] 00:05:42.758 } 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "method": "nvmf_set_max_subsystems", 00:05:42.758 "params": { 00:05:42.758 "max_subsystems": 1024 00:05:42.758 } 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "method": "nvmf_set_crdt", 00:05:42.758 "params": { 00:05:42.758 "crdt1": 0, 00:05:42.758 "crdt2": 0, 00:05:42.758 "crdt3": 0 00:05:42.758 } 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "method": "nvmf_create_transport", 00:05:42.758 "params": { 00:05:42.758 "trtype": "TCP", 00:05:42.758 "max_queue_depth": 128, 00:05:42.758 "max_io_qpairs_per_ctrlr": 127, 00:05:42.758 "in_capsule_data_size": 4096, 00:05:42.758 "max_io_size": 131072, 00:05:42.758 "io_unit_size": 131072, 00:05:42.758 "max_aq_depth": 128, 00:05:42.758 "num_shared_buffers": 511, 00:05:42.758 
"buf_cache_size": 4294967295, 00:05:42.758 "dif_insert_or_strip": false, 00:05:42.758 "zcopy": false, 00:05:42.758 "c2h_success": true, 00:05:42.758 "sock_priority": 0, 00:05:42.758 "abort_timeout_sec": 1, 00:05:42.758 "ack_timeout": 0, 00:05:42.758 "data_wr_pool_size": 0 00:05:42.758 } 00:05:42.758 } 00:05:42.758 ] 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "subsystem": "iscsi", 00:05:42.758 "config": [ 00:05:42.758 { 00:05:42.758 "method": "iscsi_set_options", 00:05:42.758 "params": { 00:05:42.758 "node_base": "iqn.2016-06.io.spdk", 00:05:42.758 "max_sessions": 128, 00:05:42.758 "max_connections_per_session": 2, 00:05:42.758 "max_queue_depth": 64, 00:05:42.758 "default_time2wait": 2, 00:05:42.758 "default_time2retain": 20, 00:05:42.758 "first_burst_length": 8192, 00:05:42.758 "immediate_data": true, 00:05:42.758 "allow_duplicated_isid": false, 00:05:42.758 "error_recovery_level": 0, 00:05:42.758 "nop_timeout": 60, 00:05:42.758 "nop_in_interval": 30, 00:05:42.758 "disable_chap": false, 00:05:42.758 "require_chap": false, 00:05:42.758 "mutual_chap": false, 00:05:42.758 "chap_group": 0, 00:05:42.758 "max_large_datain_per_connection": 64, 00:05:42.758 "max_r2t_per_connection": 4, 00:05:42.758 "pdu_pool_size": 36864, 00:05:42.758 "immediate_data_pool_size": 16384, 00:05:42.758 "data_out_pool_size": 2048 00:05:42.758 } 00:05:42.758 } 00:05:42.758 ] 00:05:42.758 } 00:05:42.758 ] 00:05:42.758 } 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57155 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57155 ']' 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57155 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57155 00:05:42.758 killing process with pid 57155 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57155' 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57155 00:05:42.758 14:43:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57155 00:05:43.325 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57184 00:05:43.325 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.325 14:43:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57184 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57184 ']' 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57184 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:48.594 14:44:02 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57184 00:05:48.594 killing process with pid 57184 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57184' 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57184 00:05:48.594 14:44:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57184 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.594 ************************************ 00:05:48.594 END TEST skip_rpc_with_json 00:05:48.594 ************************************ 00:05:48.594 00:05:48.594 real 0m7.156s 00:05:48.594 user 0m6.886s 00:05:48.594 sys 0m0.716s 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.594 14:44:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.594 14:44:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.594 14:44:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.594 14:44:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.594 ************************************ 00:05:48.594 START TEST skip_rpc_with_delay 00:05:48.594 ************************************ 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:48.594 14:44:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.595 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:48.595 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.595 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.595 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.595 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.853 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.853 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.853 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.853 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.853 14:44:03 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.853 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.853 [2024-11-22 14:44:03.329182] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.854 00:05:48.854 real 0m0.096s 00:05:48.854 user 0m0.063s 00:05:48.854 sys 0m0.032s 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.854 ************************************ 00:05:48.854 END TEST skip_rpc_with_delay 00:05:48.854 ************************************ 00:05:48.854 14:44:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.854 14:44:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.854 14:44:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.854 14:44:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.854 14:44:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.854 14:44:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.854 14:44:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.854 ************************************ 00:05:48.854 START TEST exit_on_failed_rpc_init 00:05:48.854 ************************************ 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57292 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57292 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57292 ']' 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.854 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.854 [2024-11-22 14:44:03.466027] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:05:48.854 [2024-11-22 14:44:03.466251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57292 ] 00:05:49.113 [2024-11-22 14:44:03.608488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.113 [2024-11-22 14:44:03.652931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.113 [2024-11-22 14:44:03.725034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.372 14:44:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.372 [2024-11-22 14:44:04.005053] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:05:49.372 [2024-11-22 14:44:04.005166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57303 ] 00:05:49.631 [2024-11-22 14:44:04.158976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.631 [2024-11-22 14:44:04.225666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.631 [2024-11-22 14:44:04.225785] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
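The rpc.c errors at this point come from the second spdk_tgt instance trying to bind the default RPC socket that the first instance already holds. A minimal sketch of that scenario, using only the binary and core masks seen in this run and assuming the default socket path /var/tmp/spdk.sock, could look like the following; it is not the real test_exit_on_failed_rpc_init helper, which also wires up traps and waitforlisten:

# hypothetical reproduction sketch, not the actual test helper
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &               # first instance owns /var/tmp/spdk.sock
first_pid=$!
sleep 1                            # crude stand-in for waitforlisten
if "$SPDK_BIN" -m 0x2; then        # second instance must exit non-zero: socket already in use
    echo "unexpected: second target instance started" >&2
    exit 1
fi
kill -SIGINT "$first_pid" && wait "$first_pid"

The check simply treats a non-zero exit from the second instance as the expected outcome.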
00:05:49.631 [2024-11-22 14:44:04.225805] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.631 [2024-11-22 14:44:04.225815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57292 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57292 ']' 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57292 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57292 00:05:49.890 killing process with pid 57292 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57292' 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57292 00:05:49.890 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57292 00:05:50.148 00:05:50.148 real 0m1.309s 00:05:50.148 user 0m1.441s 00:05:50.148 sys 0m0.396s 00:05:50.148 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.148 14:44:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.148 ************************************ 00:05:50.148 END TEST exit_on_failed_rpc_init 00:05:50.148 ************************************ 00:05:50.148 14:44:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.148 ************************************ 00:05:50.148 END TEST skip_rpc 00:05:50.148 ************************************ 00:05:50.148 00:05:50.148 real 0m14.452s 00:05:50.148 user 0m13.662s 00:05:50.148 sys 0m1.668s 00:05:50.148 14:44:04 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.148 14:44:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.407 14:44:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.407 14:44:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.407 14:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.407 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.407 
************************************ 00:05:50.407 START TEST rpc_client 00:05:50.407 ************************************ 00:05:50.407 14:44:04 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.407 * Looking for test storage... 00:05:50.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:50.407 14:44:04 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.407 14:44:04 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.407 14:44:04 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.407 14:44:04 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.407 14:44:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.408 14:44:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:50.408 14:44:04 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.408 14:44:04 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.408 --rc genhtml_branch_coverage=1 00:05:50.408 --rc genhtml_function_coverage=1 00:05:50.408 --rc genhtml_legend=1 00:05:50.408 --rc geninfo_all_blocks=1 00:05:50.408 --rc geninfo_unexecuted_blocks=1 00:05:50.408 00:05:50.408 ' 00:05:50.408 14:44:04 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.408 --rc genhtml_branch_coverage=1 00:05:50.408 --rc genhtml_function_coverage=1 00:05:50.408 --rc genhtml_legend=1 00:05:50.408 --rc geninfo_all_blocks=1 00:05:50.408 --rc geninfo_unexecuted_blocks=1 00:05:50.408 00:05:50.408 ' 00:05:50.408 14:44:04 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.408 --rc genhtml_branch_coverage=1 00:05:50.408 --rc genhtml_function_coverage=1 00:05:50.408 --rc genhtml_legend=1 00:05:50.408 --rc geninfo_all_blocks=1 00:05:50.408 --rc geninfo_unexecuted_blocks=1 00:05:50.408 00:05:50.408 ' 00:05:50.408 14:44:04 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.408 --rc genhtml_branch_coverage=1 00:05:50.408 --rc genhtml_function_coverage=1 00:05:50.408 --rc genhtml_legend=1 00:05:50.408 --rc geninfo_all_blocks=1 00:05:50.408 --rc geninfo_unexecuted_blocks=1 00:05:50.408 00:05:50.408 ' 00:05:50.408 14:44:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:50.408 OK 00:05:50.408 14:44:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.408 ************************************ 00:05:50.408 END TEST rpc_client 00:05:50.408 ************************************ 00:05:50.408 00:05:50.408 real 0m0.191s 00:05:50.408 user 0m0.104s 00:05:50.408 sys 0m0.096s 00:05:50.408 14:44:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.408 14:44:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.408 14:44:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.408 14:44:05 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.408 14:44:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.408 14:44:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.408 ************************************ 00:05:50.408 START TEST json_config 00:05:50.408 ************************************ 00:05:50.408 14:44:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.666 14:44:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.666 14:44:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.666 14:44:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.666 14:44:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.666 14:44:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.666 14:44:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:50.666 14:44:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.666 14:44:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.666 14:44:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.666 14:44:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.666 14:44:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.666 14:44:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.666 14:44:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.666 14:44:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.666 14:44:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.666 --rc genhtml_branch_coverage=1 00:05:50.666 --rc genhtml_function_coverage=1 00:05:50.666 --rc genhtml_legend=1 00:05:50.666 --rc geninfo_all_blocks=1 00:05:50.666 --rc geninfo_unexecuted_blocks=1 00:05:50.666 00:05:50.666 ' 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.666 --rc genhtml_branch_coverage=1 00:05:50.666 --rc genhtml_function_coverage=1 00:05:50.666 --rc genhtml_legend=1 00:05:50.666 --rc geninfo_all_blocks=1 00:05:50.666 --rc geninfo_unexecuted_blocks=1 00:05:50.666 00:05:50.666 ' 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.666 --rc genhtml_branch_coverage=1 00:05:50.666 --rc genhtml_function_coverage=1 00:05:50.666 --rc genhtml_legend=1 00:05:50.666 --rc geninfo_all_blocks=1 00:05:50.666 --rc geninfo_unexecuted_blocks=1 00:05:50.666 00:05:50.666 ' 00:05:50.666 14:44:05 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.666 --rc genhtml_branch_coverage=1 00:05:50.666 --rc genhtml_function_coverage=1 00:05:50.666 --rc genhtml_legend=1 00:05:50.666 --rc geninfo_all_blocks=1 00:05:50.666 --rc geninfo_unexecuted_blocks=1 00:05:50.666 00:05:50.666 ' 00:05:50.666 14:44:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.666 14:44:05 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.666 14:44:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.666 14:44:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.666 14:44:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.666 14:44:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.666 14:44:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.666 14:44:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.666 14:44:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.666 14:44:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.666 14:44:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.666 14:44:05 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.666 14:44:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.667 14:44:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.667 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.667 14:44:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.667 14:44:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.667 14:44:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:50.667 INFO: JSON configuration test init 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.667 14:44:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.667 14:44:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.667 14:44:05 json_config -- json_config/common.sh@10 -- # shift 
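Everything in this test drives the target through a dedicated RPC socket rather than the default one. A rough sketch of the start-up pattern being set up here, assuming the same flags, socket path, and rpc.py script that appear in the trace (the real json_config_test_start_app adds pid bookkeeping and retries):

# sketch only, under the assumptions stated above
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &   # start the target paused, listening on $SOCK
tgt_pid=$!
# every later RPC in the test (load_config, save_config, notify_get_types, ...) targets the same socket:
"$RPC" -s "$SOCK" notify_get_types

Keeping the test target on its own socket avoids colliding with any other SPDK process that may be using /var/tmp/spdk.sock.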
00:05:50.667 14:44:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.667 14:44:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.667 14:44:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.667 14:44:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.667 14:44:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.667 14:44:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57442 00:05:50.667 14:44:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.667 Waiting for target to run... 00:05:50.667 14:44:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.667 14:44:05 json_config -- json_config/common.sh@25 -- # waitforlisten 57442 /var/tmp/spdk_tgt.sock 00:05:50.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 57442 ']' 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.667 14:44:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.925 [2024-11-22 14:44:05.337895] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:05:50.925 [2024-11-22 14:44:05.338233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57442 ] 00:05:51.184 [2024-11-22 14:44:05.786524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.184 [2024-11-22 14:44:05.834566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.750 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:51.750 14:44:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.750 14:44:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.750 14:44:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:51.750 14:44:06 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.008 [2024-11-22 14:44:06.662283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.267 14:44:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.267 14:44:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:52.267 14:44:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.267 14:44:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@54 -- # sort 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:52.833 14:44:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.833 14:44:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:52.833 14:44:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.833 14:44:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.833 14:44:07 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:52.833 14:44:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.833 14:44:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.092 MallocForNvmf0 00:05:53.092 14:44:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:53.092 14:44:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:53.351 MallocForNvmf1 00:05:53.351 14:44:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.351 14:44:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.609 [2024-11-22 14:44:08.133505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.609 14:44:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.609 14:44:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.868 14:44:08 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.868 14:44:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:54.126 14:44:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:54.126 14:44:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:54.384 14:44:08 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:54.384 14:44:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:54.645 [2024-11-22 14:44:09.150106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.645 14:44:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:54.645 14:44:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.646 14:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.646 14:44:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:54.646 14:44:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.646 14:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.646 14:44:09 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:54.646 14:44:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.646 14:44:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.904 MallocBdevForConfigChangeCheck 00:05:54.904 14:44:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:54.904 14:44:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.904 14:44:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.904 14:44:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:54.904 14:44:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.470 INFO: shutting down applications... 00:05:55.470 14:44:09 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:55.470 14:44:09 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:55.470 14:44:09 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:55.471 14:44:09 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:55.471 14:44:09 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:55.728 Calling clear_iscsi_subsystem 00:05:55.729 Calling clear_nvmf_subsystem 00:05:55.729 Calling clear_nbd_subsystem 00:05:55.729 Calling clear_ublk_subsystem 00:05:55.729 Calling clear_vhost_blk_subsystem 00:05:55.729 Calling clear_vhost_scsi_subsystem 00:05:55.729 Calling clear_bdev_subsystem 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:55.729 14:44:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:56.297 14:44:10 json_config -- json_config/json_config.sh@352 -- # break 00:05:56.297 14:44:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:56.297 14:44:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:56.297 14:44:10 json_config -- json_config/common.sh@31 -- # local app=target 00:05:56.297 14:44:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.297 14:44:10 json_config -- json_config/common.sh@35 -- # [[ -n 57442 ]] 00:05:56.298 14:44:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57442 00:05:56.298 14:44:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.298 14:44:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.298 14:44:10 json_config -- json_config/common.sh@41 -- # kill -0 57442 00:05:56.298 14:44:10 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:56.871 14:44:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.871 14:44:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.871 SPDK target shutdown done 00:05:56.871 INFO: relaunching applications... 00:05:56.871 14:44:11 json_config -- json_config/common.sh@41 -- # kill -0 57442 00:05:56.871 14:44:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.871 14:44:11 json_config -- json_config/common.sh@43 -- # break 00:05:56.871 14:44:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.871 14:44:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.871 14:44:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:56.871 14:44:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.871 14:44:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:56.871 14:44:11 json_config -- json_config/common.sh@10 -- # shift 00:05:56.871 14:44:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.871 14:44:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.871 14:44:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.871 14:44:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.871 14:44:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.871 14:44:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57643 00:05:56.871 14:44:11 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.871 14:44:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.871 Waiting for target to run... 00:05:56.871 14:44:11 json_config -- json_config/common.sh@25 -- # waitforlisten 57643 /var/tmp/spdk_tgt.sock 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 57643 ']' 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.871 14:44:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.871 [2024-11-22 14:44:11.352588] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
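The relaunch step boots a fresh target directly from the JSON captured from the first instance. Under the same path assumptions as the earlier sketch, the round-trip is roughly:

# capture the live configuration, stop the target, then replay the file at start-up
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
"$RPC" -s /var/tmp/spdk_tgt.sock save_config > "$CFG"
kill -SIGINT "$tgt_pid" && wait "$tgt_pid"     # $tgt_pid as in the earlier start-up sketch
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &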
00:05:56.871 [2024-11-22 14:44:11.352687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57643 ] 00:05:57.139 [2024-11-22 14:44:11.786074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.398 [2024-11-22 14:44:11.838826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.398 [2024-11-22 14:44:11.977750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.657 [2024-11-22 14:44:12.197847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.657 [2024-11-22 14:44:12.229965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.916 14:44:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.916 00:05:57.916 INFO: Checking if target configuration is the same... 00:05:57.916 14:44:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:57.916 14:44:12 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.916 14:44:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:57.916 14:44:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:57.916 14:44:12 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.916 14:44:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:57.916 14:44:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.916 + '[' 2 -ne 2 ']' 00:05:57.916 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:57.916 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:57.916 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:57.916 +++ basename /dev/fd/62 00:05:57.916 ++ mktemp /tmp/62.XXX 00:05:57.916 + tmp_file_1=/tmp/62.9s7 00:05:57.916 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.916 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.916 + tmp_file_2=/tmp/spdk_tgt_config.json.CVf 00:05:57.916 + ret=0 00:05:57.916 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:58.204 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:58.205 + diff -u /tmp/62.9s7 /tmp/spdk_tgt_config.json.CVf 00:05:58.205 INFO: JSON config files are the same 00:05:58.205 + echo 'INFO: JSON config files are the same' 00:05:58.205 + rm /tmp/62.9s7 /tmp/spdk_tgt_config.json.CVf 00:05:58.205 + exit 0 00:05:58.205 INFO: changing configuration and checking if this can be detected... 00:05:58.205 14:44:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:58.205 14:44:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
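The "JSON config files are the same" verdict relies on normalizing both documents before diffing them, since the two dumps may order their entries differently. A sketch of that comparison, assuming config_filter.py reads the JSON on stdin as the bare invocations above suggest:

# sort both configs into a canonical order, then a plain diff decides the outcome
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
"$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live.sorted.json
"$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ondisk.sorted.json
diff -u /tmp/live.sorted.json /tmp/ondisk.sorted.json && echo "configs match"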
00:05:58.205 14:44:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.205 14:44:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.464 14:44:13 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.464 14:44:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:58.464 14:44:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.464 + '[' 2 -ne 2 ']' 00:05:58.464 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:58.464 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:58.464 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:58.464 +++ basename /dev/fd/62 00:05:58.464 ++ mktemp /tmp/62.XXX 00:05:58.464 + tmp_file_1=/tmp/62.yXp 00:05:58.464 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.464 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.464 + tmp_file_2=/tmp/spdk_tgt_config.json.3ll 00:05:58.464 + ret=0 00:05:58.464 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.031 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.031 + diff -u /tmp/62.yXp /tmp/spdk_tgt_config.json.3ll 00:05:59.031 + ret=1 00:05:59.031 + echo '=== Start of file: /tmp/62.yXp ===' 00:05:59.031 + cat /tmp/62.yXp 00:05:59.031 + echo '=== End of file: /tmp/62.yXp ===' 00:05:59.031 + echo '' 00:05:59.031 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3ll ===' 00:05:59.031 + cat /tmp/spdk_tgt_config.json.3ll 00:05:59.031 + echo '=== End of file: /tmp/spdk_tgt_config.json.3ll ===' 00:05:59.031 + echo '' 00:05:59.031 + rm /tmp/62.yXp /tmp/spdk_tgt_config.json.3ll 00:05:59.031 + exit 1 00:05:59.031 INFO: configuration change detected. 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
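The change-detection pass differs only in its expectation: once MallocBdevForConfigChangeCheck has been deleted over RPC, the same sorted diff must now report a difference (the ret=1 above). A hedged sketch of that step, reusing the hypothetical check_config_matches helper from the previous sketch:

  rootdir=/home/vagrant/spdk_repo/spdk
  # Remove the marker bdev created earlier, then expect the comparison to fail.
  "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  if check_config_matches "$rootdir/spdk_tgt_config.json"; then   # helper sketched above
      echo 'ERROR: configuration change was not detected' >&2
      exit 1
  fi
  echo 'INFO: configuration change detected.'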
00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:59.031 14:44:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.031 14:44:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@324 -- # [[ -n 57643 ]] 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:59.031 14:44:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.031 14:44:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:59.031 14:44:13 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:59.032 14:44:13 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:59.032 14:44:13 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:59.032 14:44:13 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:59.032 14:44:13 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.032 14:44:13 json_config -- json_config/json_config.sh@330 -- # killprocess 57643 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@954 -- # '[' -z 57643 ']' 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@958 -- # kill -0 57643 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@959 -- # uname 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.032 14:44:13 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57643 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.291 killing process with pid 57643 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57643' 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@973 -- # kill 57643 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@978 -- # wait 57643 00:05:59.291 14:44:13 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:59.291 14:44:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.291 14:44:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 INFO: Success 00:05:59.551 14:44:13 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:59.551 14:44:13 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:59.551 ************************************ 00:05:59.551 END TEST json_config 00:05:59.551 
************************************ 00:05:59.551 00:05:59.551 real 0m8.925s 00:05:59.551 user 0m12.860s 00:05:59.551 sys 0m1.809s 00:05:59.551 14:44:13 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.551 14:44:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 14:44:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.551 14:44:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.551 14:44:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.551 14:44:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 ************************************ 00:05:59.551 START TEST json_config_extra_key 00:05:59.551 ************************************ 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.551 14:44:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.551 --rc genhtml_branch_coverage=1 00:05:59.551 --rc genhtml_function_coverage=1 00:05:59.551 --rc genhtml_legend=1 00:05:59.551 --rc geninfo_all_blocks=1 00:05:59.551 --rc geninfo_unexecuted_blocks=1 00:05:59.551 00:05:59.551 ' 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.551 --rc genhtml_branch_coverage=1 00:05:59.551 --rc genhtml_function_coverage=1 00:05:59.551 --rc genhtml_legend=1 00:05:59.551 --rc geninfo_all_blocks=1 00:05:59.551 --rc geninfo_unexecuted_blocks=1 00:05:59.551 00:05:59.551 ' 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.551 --rc genhtml_branch_coverage=1 00:05:59.551 --rc genhtml_function_coverage=1 00:05:59.551 --rc genhtml_legend=1 00:05:59.551 --rc geninfo_all_blocks=1 00:05:59.551 --rc geninfo_unexecuted_blocks=1 00:05:59.551 00:05:59.551 ' 00:05:59.551 14:44:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.551 --rc genhtml_branch_coverage=1 00:05:59.551 --rc genhtml_function_coverage=1 00:05:59.551 --rc genhtml_legend=1 00:05:59.551 --rc geninfo_all_blocks=1 00:05:59.551 --rc geninfo_unexecuted_blocks=1 00:05:59.551 00:05:59.551 ' 00:05:59.551 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.551 14:44:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.552 14:44:14 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.552 14:44:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.552 14:44:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.552 14:44:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.552 14:44:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.552 14:44:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.552 14:44:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.552 14:44:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.552 14:44:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.552 14:44:14 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.552 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.552 14:44:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.552 INFO: launching applications... 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
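json_config/common.sh keeps per-application state in the bash associative arrays declared above (app_pid, app_socket, app_params, configs_path), which is what lets the same start/stop helpers serve different application roles. A simplified, illustrative start helper built on those arrays follows; the spdk_tgt flags and array names are taken from the log, but start_app_sketch is an invented name and the real json_config_test_start_app does considerably more bookkeeping.

  rootdir=/home/vagrant/spdk_repo/spdk
  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

  start_app_sketch() {   # illustrative stand-in for json_config_test_start_app
      local app=$1
      # app_params is intentionally unquoted so '-m 0x1 -s 1024' splits into separate flags.
      "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
      echo "Waiting for $app to run..."
  }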
00:05:59.552 14:44:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57792 00:05:59.552 Waiting for target to run... 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57792 /var/tmp/spdk_tgt.sock 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57792 ']' 00:05:59.552 14:44:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.552 14:44:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 [2024-11-22 14:44:14.277659] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:05:59.811 [2024-11-22 14:44:14.277762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57792 ] 00:06:00.070 [2024-11-22 14:44:14.718135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.328 [2024-11-22 14:44:14.771250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.328 [2024-11-22 14:44:14.803481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.896 14:44:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.896 00:06:00.896 14:44:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.896 INFO: shutting down applications... 00:06:00.896 14:44:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
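"Waiting for target to run..." is the waitforlisten step: the script keeps retrying an RPC over the newly created UNIX-domain socket (max_retries=100 in the log) until spdk_tgt answers. A rough approximation of that loop is shown below, polling spdk_get_version, an RPC that does exist (it appears in the rpc_get_methods listing later in this log); the real waitforlisten in autotest_common.sh is more involved and this helper name is invented.

  rootdir=/home/vagrant/spdk_repo/spdk
  wait_for_rpc_sketch() {   # illustrative; not the real waitforlisten
      local rpc_addr=${1:-/var/tmp/spdk_tgt.sock}
      local max_retries=${2:-100}
      local i
      for ((i = 0; i < max_retries; i++)); do
          # -t sets a per-call timeout in seconds, the same flag used for the TCP test later.
          if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      echo "ERROR: process did not start listening on $rpc_addr" >&2
      return 1
  }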
00:06:00.896 14:44:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57792 ]] 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57792 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57792 00:06:00.896 14:44:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57792 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.155 SPDK target shutdown done 00:06:01.155 14:44:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.155 Success 00:06:01.155 14:44:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.155 00:06:01.155 real 0m1.748s 00:06:01.155 user 0m1.674s 00:06:01.155 sys 0m0.458s 00:06:01.155 14:44:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.155 ************************************ 00:06:01.155 END TEST json_config_extra_key 00:06:01.155 ************************************ 00:06:01.155 14:44:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.414 14:44:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.414 14:44:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.414 14:44:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.414 14:44:15 -- common/autotest_common.sh@10 -- # set +x 00:06:01.414 ************************************ 00:06:01.414 START TEST alias_rpc 00:06:01.414 ************************************ 00:06:01.414 14:44:15 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.414 * Looking for test storage... 
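The shutdown half of the json_config_extra_key run above mirrors the json_config shutdown at the start of this section: send SIGINT to the recorded pid, then poll with kill -0 for up to 30 half-second intervals until the process is gone. A condensed, illustrative version follows; shutdown_app_sketch is an invented name, while the signal, retry count, sleep interval, and app_pid array all match the log.

  shutdown_app_sketch() {   # illustrative stand-in for json_config_test_shutdown_app
      local app=$1 i
      [[ -n ${app_pid[$app]} ]] || return 0
      kill -SIGINT "${app_pid[$app]}"
      for ((i = 0; i < 30; i++)); do
          # kill -0 only checks whether the pid still exists; it sends no signal.
          kill -0 "${app_pid[$app]}" 2>/dev/null || { app_pid[$app]=''; break; }
          sleep 0.5
      done
      [[ -z ${app_pid[$app]} ]] && echo 'SPDK target shutdown done'
  }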
00:06:01.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:01.414 14:44:15 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.414 14:44:15 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.414 14:44:15 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.414 14:44:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.414 14:44:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.415 --rc genhtml_branch_coverage=1 00:06:01.415 --rc genhtml_function_coverage=1 00:06:01.415 --rc genhtml_legend=1 00:06:01.415 --rc geninfo_all_blocks=1 00:06:01.415 --rc geninfo_unexecuted_blocks=1 00:06:01.415 00:06:01.415 ' 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.415 --rc genhtml_branch_coverage=1 00:06:01.415 --rc genhtml_function_coverage=1 00:06:01.415 --rc genhtml_legend=1 00:06:01.415 --rc geninfo_all_blocks=1 00:06:01.415 --rc geninfo_unexecuted_blocks=1 00:06:01.415 00:06:01.415 ' 00:06:01.415 14:44:16 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.415 --rc genhtml_branch_coverage=1 00:06:01.415 --rc genhtml_function_coverage=1 00:06:01.415 --rc genhtml_legend=1 00:06:01.415 --rc geninfo_all_blocks=1 00:06:01.415 --rc geninfo_unexecuted_blocks=1 00:06:01.415 00:06:01.415 ' 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.415 --rc genhtml_branch_coverage=1 00:06:01.415 --rc genhtml_function_coverage=1 00:06:01.415 --rc genhtml_legend=1 00:06:01.415 --rc geninfo_all_blocks=1 00:06:01.415 --rc geninfo_unexecuted_blocks=1 00:06:01.415 00:06:01.415 ' 00:06:01.415 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.415 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57870 00:06:01.415 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57870 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57870 ']' 00:06:01.415 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.415 14:44:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.674 [2024-11-22 14:44:16.082482] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:01.674 [2024-11-22 14:44:16.082585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57870 ] 00:06:01.674 [2024-11-22 14:44:16.229173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.674 [2024-11-22 14:44:16.289157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.933 [2024-11-22 14:44:16.362244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.933 14:44:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.933 14:44:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.933 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.501 14:44:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57870 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57870 ']' 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57870 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57870 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.501 killing process with pid 57870 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57870' 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 57870 00:06:02.501 14:44:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 57870 00:06:02.760 00:06:02.760 real 0m1.460s 00:06:02.760 user 0m1.501s 00:06:02.760 sys 0m0.451s 00:06:02.760 14:44:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.760 ************************************ 00:06:02.760 END TEST alias_rpc 00:06:02.760 ************************************ 00:06:02.760 14:44:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.760 14:44:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:02.760 14:44:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.760 14:44:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.760 14:44:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.760 14:44:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.760 ************************************ 00:06:02.760 START TEST spdkcli_tcp 00:06:02.760 ************************************ 00:06:02.760 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:03.020 * Looking for test storage... 
00:06:03.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.020 14:44:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.020 --rc genhtml_branch_coverage=1 00:06:03.020 --rc genhtml_function_coverage=1 00:06:03.020 --rc genhtml_legend=1 00:06:03.020 --rc geninfo_all_blocks=1 00:06:03.020 --rc geninfo_unexecuted_blocks=1 00:06:03.020 00:06:03.020 ' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.020 --rc genhtml_branch_coverage=1 00:06:03.020 --rc genhtml_function_coverage=1 00:06:03.020 --rc genhtml_legend=1 00:06:03.020 --rc geninfo_all_blocks=1 00:06:03.020 --rc geninfo_unexecuted_blocks=1 00:06:03.020 
00:06:03.020 ' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.020 --rc genhtml_branch_coverage=1 00:06:03.020 --rc genhtml_function_coverage=1 00:06:03.020 --rc genhtml_legend=1 00:06:03.020 --rc geninfo_all_blocks=1 00:06:03.020 --rc geninfo_unexecuted_blocks=1 00:06:03.020 00:06:03.020 ' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.020 --rc genhtml_branch_coverage=1 00:06:03.020 --rc genhtml_function_coverage=1 00:06:03.020 --rc genhtml_legend=1 00:06:03.020 --rc geninfo_all_blocks=1 00:06:03.020 --rc geninfo_unexecuted_blocks=1 00:06:03.020 00:06:03.020 ' 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57946 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:03.020 14:44:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57946 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57946 ']' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.020 14:44:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.020 [2024-11-22 14:44:17.601590] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:03.020 [2024-11-22 14:44:17.601700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57946 ] 00:06:03.279 [2024-11-22 14:44:17.749799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.279 [2024-11-22 14:44:17.813520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.279 [2024-11-22 14:44:17.813529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.279 [2024-11-22 14:44:17.884877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.215 14:44:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.215 14:44:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:04.215 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57963 00:06:04.215 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.215 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.215 [ 00:06:04.215 "bdev_malloc_delete", 00:06:04.215 "bdev_malloc_create", 00:06:04.215 "bdev_null_resize", 00:06:04.215 "bdev_null_delete", 00:06:04.215 "bdev_null_create", 00:06:04.215 "bdev_nvme_cuse_unregister", 00:06:04.215 "bdev_nvme_cuse_register", 00:06:04.215 "bdev_opal_new_user", 00:06:04.215 "bdev_opal_set_lock_state", 00:06:04.215 "bdev_opal_delete", 00:06:04.215 "bdev_opal_get_info", 00:06:04.215 "bdev_opal_create", 00:06:04.215 "bdev_nvme_opal_revert", 00:06:04.215 "bdev_nvme_opal_init", 00:06:04.215 "bdev_nvme_send_cmd", 00:06:04.215 "bdev_nvme_set_keys", 00:06:04.215 "bdev_nvme_get_path_iostat", 00:06:04.215 "bdev_nvme_get_mdns_discovery_info", 00:06:04.215 "bdev_nvme_stop_mdns_discovery", 00:06:04.215 "bdev_nvme_start_mdns_discovery", 00:06:04.215 "bdev_nvme_set_multipath_policy", 00:06:04.215 "bdev_nvme_set_preferred_path", 00:06:04.215 "bdev_nvme_get_io_paths", 00:06:04.215 "bdev_nvme_remove_error_injection", 00:06:04.215 "bdev_nvme_add_error_injection", 00:06:04.215 "bdev_nvme_get_discovery_info", 00:06:04.215 "bdev_nvme_stop_discovery", 00:06:04.215 "bdev_nvme_start_discovery", 00:06:04.215 "bdev_nvme_get_controller_health_info", 00:06:04.215 "bdev_nvme_disable_controller", 00:06:04.215 "bdev_nvme_enable_controller", 00:06:04.215 "bdev_nvme_reset_controller", 00:06:04.215 "bdev_nvme_get_transport_statistics", 00:06:04.215 "bdev_nvme_apply_firmware", 00:06:04.215 "bdev_nvme_detach_controller", 00:06:04.215 "bdev_nvme_get_controllers", 00:06:04.215 "bdev_nvme_attach_controller", 00:06:04.215 "bdev_nvme_set_hotplug", 00:06:04.215 "bdev_nvme_set_options", 00:06:04.215 "bdev_passthru_delete", 00:06:04.215 "bdev_passthru_create", 00:06:04.215 "bdev_lvol_set_parent_bdev", 00:06:04.215 "bdev_lvol_set_parent", 00:06:04.215 "bdev_lvol_check_shallow_copy", 00:06:04.215 "bdev_lvol_start_shallow_copy", 00:06:04.215 "bdev_lvol_grow_lvstore", 00:06:04.215 "bdev_lvol_get_lvols", 00:06:04.215 "bdev_lvol_get_lvstores", 00:06:04.215 "bdev_lvol_delete", 00:06:04.215 "bdev_lvol_set_read_only", 00:06:04.215 "bdev_lvol_resize", 00:06:04.215 "bdev_lvol_decouple_parent", 00:06:04.215 "bdev_lvol_inflate", 00:06:04.215 "bdev_lvol_rename", 00:06:04.215 "bdev_lvol_clone_bdev", 00:06:04.215 "bdev_lvol_clone", 00:06:04.215 "bdev_lvol_snapshot", 
00:06:04.215 "bdev_lvol_create", 00:06:04.215 "bdev_lvol_delete_lvstore", 00:06:04.215 "bdev_lvol_rename_lvstore", 00:06:04.215 "bdev_lvol_create_lvstore", 00:06:04.215 "bdev_raid_set_options", 00:06:04.215 "bdev_raid_remove_base_bdev", 00:06:04.215 "bdev_raid_add_base_bdev", 00:06:04.215 "bdev_raid_delete", 00:06:04.215 "bdev_raid_create", 00:06:04.215 "bdev_raid_get_bdevs", 00:06:04.215 "bdev_error_inject_error", 00:06:04.215 "bdev_error_delete", 00:06:04.215 "bdev_error_create", 00:06:04.215 "bdev_split_delete", 00:06:04.215 "bdev_split_create", 00:06:04.215 "bdev_delay_delete", 00:06:04.215 "bdev_delay_create", 00:06:04.215 "bdev_delay_update_latency", 00:06:04.215 "bdev_zone_block_delete", 00:06:04.215 "bdev_zone_block_create", 00:06:04.215 "blobfs_create", 00:06:04.215 "blobfs_detect", 00:06:04.215 "blobfs_set_cache_size", 00:06:04.215 "bdev_aio_delete", 00:06:04.215 "bdev_aio_rescan", 00:06:04.215 "bdev_aio_create", 00:06:04.215 "bdev_ftl_set_property", 00:06:04.215 "bdev_ftl_get_properties", 00:06:04.215 "bdev_ftl_get_stats", 00:06:04.215 "bdev_ftl_unmap", 00:06:04.215 "bdev_ftl_unload", 00:06:04.215 "bdev_ftl_delete", 00:06:04.215 "bdev_ftl_load", 00:06:04.215 "bdev_ftl_create", 00:06:04.215 "bdev_virtio_attach_controller", 00:06:04.215 "bdev_virtio_scsi_get_devices", 00:06:04.215 "bdev_virtio_detach_controller", 00:06:04.215 "bdev_virtio_blk_set_hotplug", 00:06:04.215 "bdev_iscsi_delete", 00:06:04.215 "bdev_iscsi_create", 00:06:04.215 "bdev_iscsi_set_options", 00:06:04.215 "bdev_uring_delete", 00:06:04.215 "bdev_uring_rescan", 00:06:04.215 "bdev_uring_create", 00:06:04.215 "accel_error_inject_error", 00:06:04.215 "ioat_scan_accel_module", 00:06:04.215 "dsa_scan_accel_module", 00:06:04.215 "iaa_scan_accel_module", 00:06:04.215 "keyring_file_remove_key", 00:06:04.215 "keyring_file_add_key", 00:06:04.215 "keyring_linux_set_options", 00:06:04.215 "fsdev_aio_delete", 00:06:04.215 "fsdev_aio_create", 00:06:04.215 "iscsi_get_histogram", 00:06:04.215 "iscsi_enable_histogram", 00:06:04.215 "iscsi_set_options", 00:06:04.215 "iscsi_get_auth_groups", 00:06:04.215 "iscsi_auth_group_remove_secret", 00:06:04.215 "iscsi_auth_group_add_secret", 00:06:04.215 "iscsi_delete_auth_group", 00:06:04.215 "iscsi_create_auth_group", 00:06:04.215 "iscsi_set_discovery_auth", 00:06:04.215 "iscsi_get_options", 00:06:04.215 "iscsi_target_node_request_logout", 00:06:04.215 "iscsi_target_node_set_redirect", 00:06:04.215 "iscsi_target_node_set_auth", 00:06:04.215 "iscsi_target_node_add_lun", 00:06:04.215 "iscsi_get_stats", 00:06:04.215 "iscsi_get_connections", 00:06:04.215 "iscsi_portal_group_set_auth", 00:06:04.215 "iscsi_start_portal_group", 00:06:04.215 "iscsi_delete_portal_group", 00:06:04.215 "iscsi_create_portal_group", 00:06:04.215 "iscsi_get_portal_groups", 00:06:04.215 "iscsi_delete_target_node", 00:06:04.215 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.215 "iscsi_target_node_add_pg_ig_maps", 00:06:04.215 "iscsi_create_target_node", 00:06:04.215 "iscsi_get_target_nodes", 00:06:04.215 "iscsi_delete_initiator_group", 00:06:04.215 "iscsi_initiator_group_remove_initiators", 00:06:04.215 "iscsi_initiator_group_add_initiators", 00:06:04.215 "iscsi_create_initiator_group", 00:06:04.215 "iscsi_get_initiator_groups", 00:06:04.215 "nvmf_set_crdt", 00:06:04.215 "nvmf_set_config", 00:06:04.215 "nvmf_set_max_subsystems", 00:06:04.215 "nvmf_stop_mdns_prr", 00:06:04.215 "nvmf_publish_mdns_prr", 00:06:04.215 "nvmf_subsystem_get_listeners", 00:06:04.215 "nvmf_subsystem_get_qpairs", 00:06:04.215 
"nvmf_subsystem_get_controllers", 00:06:04.215 "nvmf_get_stats", 00:06:04.216 "nvmf_get_transports", 00:06:04.216 "nvmf_create_transport", 00:06:04.216 "nvmf_get_targets", 00:06:04.216 "nvmf_delete_target", 00:06:04.216 "nvmf_create_target", 00:06:04.216 "nvmf_subsystem_allow_any_host", 00:06:04.216 "nvmf_subsystem_set_keys", 00:06:04.216 "nvmf_subsystem_remove_host", 00:06:04.216 "nvmf_subsystem_add_host", 00:06:04.216 "nvmf_ns_remove_host", 00:06:04.216 "nvmf_ns_add_host", 00:06:04.216 "nvmf_subsystem_remove_ns", 00:06:04.216 "nvmf_subsystem_set_ns_ana_group", 00:06:04.216 "nvmf_subsystem_add_ns", 00:06:04.216 "nvmf_subsystem_listener_set_ana_state", 00:06:04.216 "nvmf_discovery_get_referrals", 00:06:04.216 "nvmf_discovery_remove_referral", 00:06:04.216 "nvmf_discovery_add_referral", 00:06:04.216 "nvmf_subsystem_remove_listener", 00:06:04.216 "nvmf_subsystem_add_listener", 00:06:04.216 "nvmf_delete_subsystem", 00:06:04.216 "nvmf_create_subsystem", 00:06:04.216 "nvmf_get_subsystems", 00:06:04.216 "env_dpdk_get_mem_stats", 00:06:04.216 "nbd_get_disks", 00:06:04.216 "nbd_stop_disk", 00:06:04.216 "nbd_start_disk", 00:06:04.216 "ublk_recover_disk", 00:06:04.216 "ublk_get_disks", 00:06:04.216 "ublk_stop_disk", 00:06:04.216 "ublk_start_disk", 00:06:04.216 "ublk_destroy_target", 00:06:04.216 "ublk_create_target", 00:06:04.216 "virtio_blk_create_transport", 00:06:04.216 "virtio_blk_get_transports", 00:06:04.216 "vhost_controller_set_coalescing", 00:06:04.216 "vhost_get_controllers", 00:06:04.216 "vhost_delete_controller", 00:06:04.216 "vhost_create_blk_controller", 00:06:04.216 "vhost_scsi_controller_remove_target", 00:06:04.216 "vhost_scsi_controller_add_target", 00:06:04.216 "vhost_start_scsi_controller", 00:06:04.216 "vhost_create_scsi_controller", 00:06:04.216 "thread_set_cpumask", 00:06:04.216 "scheduler_set_options", 00:06:04.216 "framework_get_governor", 00:06:04.216 "framework_get_scheduler", 00:06:04.216 "framework_set_scheduler", 00:06:04.216 "framework_get_reactors", 00:06:04.216 "thread_get_io_channels", 00:06:04.216 "thread_get_pollers", 00:06:04.216 "thread_get_stats", 00:06:04.216 "framework_monitor_context_switch", 00:06:04.216 "spdk_kill_instance", 00:06:04.216 "log_enable_timestamps", 00:06:04.216 "log_get_flags", 00:06:04.216 "log_clear_flag", 00:06:04.216 "log_set_flag", 00:06:04.216 "log_get_level", 00:06:04.216 "log_set_level", 00:06:04.216 "log_get_print_level", 00:06:04.216 "log_set_print_level", 00:06:04.216 "framework_enable_cpumask_locks", 00:06:04.216 "framework_disable_cpumask_locks", 00:06:04.216 "framework_wait_init", 00:06:04.216 "framework_start_init", 00:06:04.216 "scsi_get_devices", 00:06:04.216 "bdev_get_histogram", 00:06:04.216 "bdev_enable_histogram", 00:06:04.216 "bdev_set_qos_limit", 00:06:04.216 "bdev_set_qd_sampling_period", 00:06:04.216 "bdev_get_bdevs", 00:06:04.216 "bdev_reset_iostat", 00:06:04.216 "bdev_get_iostat", 00:06:04.216 "bdev_examine", 00:06:04.216 "bdev_wait_for_examine", 00:06:04.216 "bdev_set_options", 00:06:04.216 "accel_get_stats", 00:06:04.216 "accel_set_options", 00:06:04.216 "accel_set_driver", 00:06:04.216 "accel_crypto_key_destroy", 00:06:04.216 "accel_crypto_keys_get", 00:06:04.216 "accel_crypto_key_create", 00:06:04.216 "accel_assign_opc", 00:06:04.216 "accel_get_module_info", 00:06:04.216 "accel_get_opc_assignments", 00:06:04.216 "vmd_rescan", 00:06:04.216 "vmd_remove_device", 00:06:04.216 "vmd_enable", 00:06:04.216 "sock_get_default_impl", 00:06:04.216 "sock_set_default_impl", 00:06:04.216 "sock_impl_set_options", 00:06:04.216 
"sock_impl_get_options", 00:06:04.216 "iobuf_get_stats", 00:06:04.216 "iobuf_set_options", 00:06:04.216 "keyring_get_keys", 00:06:04.216 "framework_get_pci_devices", 00:06:04.216 "framework_get_config", 00:06:04.216 "framework_get_subsystems", 00:06:04.216 "fsdev_set_opts", 00:06:04.216 "fsdev_get_opts", 00:06:04.216 "trace_get_info", 00:06:04.216 "trace_get_tpoint_group_mask", 00:06:04.216 "trace_disable_tpoint_group", 00:06:04.216 "trace_enable_tpoint_group", 00:06:04.216 "trace_clear_tpoint_mask", 00:06:04.216 "trace_set_tpoint_mask", 00:06:04.216 "notify_get_notifications", 00:06:04.216 "notify_get_types", 00:06:04.216 "spdk_get_version", 00:06:04.216 "rpc_get_methods" 00:06:04.216 ] 00:06:04.216 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.216 14:44:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.216 14:44:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.474 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.474 14:44:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57946 00:06:04.474 14:44:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57946 ']' 00:06:04.474 14:44:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57946 00:06:04.474 14:44:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:04.474 14:44:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.474 14:44:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57946 00:06:04.475 14:44:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.475 killing process with pid 57946 00:06:04.475 14:44:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.475 14:44:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57946' 00:06:04.475 14:44:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57946 00:06:04.475 14:44:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57946 00:06:04.734 00:06:04.734 real 0m1.958s 00:06:04.734 user 0m3.676s 00:06:04.734 sys 0m0.494s 00:06:04.734 14:44:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.734 ************************************ 00:06:04.734 END TEST spdkcli_tcp 00:06:04.734 ************************************ 00:06:04.735 14:44:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 14:44:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.735 14:44:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.735 14:44:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.735 14:44:19 -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 ************************************ 00:06:04.735 START TEST dpdk_mem_utility 00:06:04.735 ************************************ 00:06:04.735 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.994 * Looking for test storage... 
00:06:04.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:04.994 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.994 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.994 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.994 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.994 14:44:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.995 14:44:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc 
genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.995 --rc genhtml_branch_coverage=1 00:06:04.995 --rc genhtml_function_coverage=1 00:06:04.995 --rc genhtml_legend=1 00:06:04.995 --rc geninfo_all_blocks=1 00:06:04.995 --rc geninfo_unexecuted_blocks=1 00:06:04.995 00:06:04.995 ' 00:06:04.995 14:44:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.995 14:44:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58040 00:06:04.995 14:44:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58040 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58040 ']' 00:06:04.995 14:44:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.995 14:44:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.995 [2024-11-22 14:44:19.607462] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:04.995 [2024-11-22 14:44:19.607580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:06:05.254 [2024-11-22 14:44:19.759211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.254 [2024-11-22 14:44:19.832774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.254 [2024-11-22 14:44:19.914814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.579 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.579 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:05.579 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.579 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.579 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.579 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.579 { 00:06:05.579 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.579 } 00:06:05.579 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.579 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.579 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:05.579 1 heaps totaling size 810.000000 MiB 00:06:05.579 size: 810.000000 MiB heap id: 0 00:06:05.579 end heaps---------- 00:06:05.579 9 mempools totaling size 595.772034 MiB 00:06:05.579 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.579 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.579 size: 92.545471 MiB name: bdev_io_58040 00:06:05.579 size: 50.003479 MiB name: msgpool_58040 00:06:05.579 size: 36.509338 MiB name: fsdev_io_58040 00:06:05.579 size: 21.763794 MiB name: PDU_Pool 00:06:05.579 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.579 size: 4.133484 MiB name: evtpool_58040 00:06:05.579 size: 0.026123 MiB name: Session_Pool 00:06:05.579 end mempools------- 00:06:05.579 6 memzones totaling size 4.142822 MiB 00:06:05.579 size: 1.000366 MiB name: RG_ring_0_58040 00:06:05.579 size: 1.000366 MiB name: RG_ring_1_58040 00:06:05.579 size: 1.000366 MiB name: RG_ring_4_58040 00:06:05.579 size: 1.000366 MiB name: RG_ring_5_58040 00:06:05.579 size: 0.125366 MiB name: RG_ring_2_58040 00:06:05.579 size: 0.015991 MiB name: RG_ring_3_58040 00:06:05.579 end memzones------- 00:06:05.579 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.840 heap id: 0 total size: 810.000000 MiB number of busy elements: 317 number of free elements: 15 00:06:05.840 list of free elements. 
size: 10.812500 MiB 00:06:05.840 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:05.840 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:05.840 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:05.840 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:05.840 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:05.840 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:05.840 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:05.840 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:05.840 element at address: 0x20001a600000 with size: 0.566956 MiB 00:06:05.840 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:05.840 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:05.840 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:05.840 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:05.840 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:05.840 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:05.840 list of standard malloc elements. size: 199.268616 MiB 00:06:05.840 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:05.840 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:05.840 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:05.840 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:05.840 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:05.840 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.840 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:05.840 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.840 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:05.841 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:05.841 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:05.841 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:05.841 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:05.841 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:05.842 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692c80 with size: 0.000183 MiB 
00:06:05.842 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:05.842 element at 
address: 0x20001a695200 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:05.842 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:05.842 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e340 
with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:05.843 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:05.843 list of memzone associated elements. 
size: 599.918884 MiB 00:06:05.843 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:05.843 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.843 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:05.843 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.843 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:05.843 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58040_0 00:06:05.843 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:05.843 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58040_0 00:06:05.843 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:05.843 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58040_0 00:06:05.843 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:05.843 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.843 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:05.843 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.843 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:05.843 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58040_0 00:06:05.843 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:05.843 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58040 00:06:05.843 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.843 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58040 00:06:05.843 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:05.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.843 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:05.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.843 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:05.843 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.843 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:05.843 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.843 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:05.843 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58040 00:06:05.843 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:05.843 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58040 00:06:05.843 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:05.843 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58040 00:06:05.843 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:05.843 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58040 00:06:05.843 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:05.843 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58040 00:06:05.843 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:05.843 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58040 00:06:05.843 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:05.843 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.843 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:05.843 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.843 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:05.843 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.843 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:05.843 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58040 00:06:05.843 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:05.843 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58040 00:06:05.843 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:05.843 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.844 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:05.844 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.844 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:05.844 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58040 00:06:05.844 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:05.844 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.844 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:05.844 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58040 00:06:05.844 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:05.844 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58040 00:06:05.844 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:05.844 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58040 00:06:05.844 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:05.844 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.844 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.844 14:44:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58040 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58040 ']' 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58040 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58040 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.844 killing process with pid 58040 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58040' 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58040 00:06:05.844 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58040 00:06:06.413 00:06:06.413 real 0m1.410s 00:06:06.413 user 0m1.381s 00:06:06.413 sys 0m0.466s 00:06:06.413 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.413 14:44:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.413 ************************************ 00:06:06.413 END TEST dpdk_mem_utility 00:06:06.413 ************************************ 00:06:06.413 14:44:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.413 14:44:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.413 14:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.413 14:44:20 -- common/autotest_common.sh@10 -- # set +x 
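The dpdk_mem_utility test that finishes above boils down to three steps: start spdk_tgt, ask it over RPC to dump DPDK memory statistics, and post-process the dump with dpdk_mem_info.py. A minimal manual sketch of that flow, using the binary and script paths from this run; the direct scripts/rpc.py invocation and the default /var/tmp/spdk.sock socket are assumptions about what the rpc_cmd helper wraps here:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &                    # start the target; it listens on /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # writes the dump to /tmp/spdk_mem_dump.txt (see RPC reply above)
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary, as printed above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # detailed element listing for heap 0, as invoked above
  kill %1                                                              # stop spdk_tgt when done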
00:06:06.413 ************************************ 00:06:06.413 START TEST event 00:06:06.413 ************************************ 00:06:06.413 14:44:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.413 * Looking for test storage... 00:06:06.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.413 14:44:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.413 14:44:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.413 14:44:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.413 14:44:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.413 14:44:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.413 14:44:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.413 14:44:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.413 14:44:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.413 14:44:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.413 14:44:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.413 14:44:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.413 14:44:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.413 14:44:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.413 14:44:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.413 14:44:21 event -- scripts/common.sh@344 -- # case "$op" in 00:06:06.413 14:44:21 event -- scripts/common.sh@345 -- # : 1 00:06:06.413 14:44:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.413 14:44:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.413 14:44:21 event -- scripts/common.sh@365 -- # decimal 1 00:06:06.413 14:44:21 event -- scripts/common.sh@353 -- # local d=1 00:06:06.413 14:44:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.413 14:44:21 event -- scripts/common.sh@355 -- # echo 1 00:06:06.413 14:44:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.413 14:44:21 event -- scripts/common.sh@366 -- # decimal 2 00:06:06.413 14:44:21 event -- scripts/common.sh@353 -- # local d=2 00:06:06.413 14:44:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.413 14:44:21 event -- scripts/common.sh@355 -- # echo 2 00:06:06.413 14:44:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.413 14:44:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.413 14:44:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.413 14:44:21 event -- scripts/common.sh@368 -- # return 0 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.413 --rc genhtml_branch_coverage=1 00:06:06.413 --rc genhtml_function_coverage=1 00:06:06.413 --rc genhtml_legend=1 00:06:06.413 --rc geninfo_all_blocks=1 00:06:06.413 --rc geninfo_unexecuted_blocks=1 00:06:06.413 00:06:06.413 ' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.413 --rc genhtml_branch_coverage=1 00:06:06.413 --rc genhtml_function_coverage=1 00:06:06.413 --rc genhtml_legend=1 00:06:06.413 --rc 
geninfo_all_blocks=1 00:06:06.413 --rc geninfo_unexecuted_blocks=1 00:06:06.413 00:06:06.413 ' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.413 --rc genhtml_branch_coverage=1 00:06:06.413 --rc genhtml_function_coverage=1 00:06:06.413 --rc genhtml_legend=1 00:06:06.413 --rc geninfo_all_blocks=1 00:06:06.413 --rc geninfo_unexecuted_blocks=1 00:06:06.413 00:06:06.413 ' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.413 --rc genhtml_branch_coverage=1 00:06:06.413 --rc genhtml_function_coverage=1 00:06:06.413 --rc genhtml_legend=1 00:06:06.413 --rc geninfo_all_blocks=1 00:06:06.413 --rc geninfo_unexecuted_blocks=1 00:06:06.413 00:06:06.413 ' 00:06:06.413 14:44:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.413 14:44:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.413 14:44:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:06.413 14:44:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.413 14:44:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.413 ************************************ 00:06:06.413 START TEST event_perf 00:06:06.413 ************************************ 00:06:06.413 14:44:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.413 Running I/O for 1 seconds...[2024-11-22 14:44:21.048273] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:06.413 [2024-11-22 14:44:21.048430] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58123 ] 00:06:06.672 [2024-11-22 14:44:21.200800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.672 [2024-11-22 14:44:21.268269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.672 [2024-11-22 14:44:21.268420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.672 [2024-11-22 14:44:21.268566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.672 [2024-11-22 14:44:21.268572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.047 Running I/O for 1 seconds... 00:06:08.047 lcore 0: 188545 00:06:08.047 lcore 1: 188546 00:06:08.047 lcore 2: 188546 00:06:08.047 lcore 3: 188545 00:06:08.047 done. 
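The event_perf run that completes here drove all four reactors for one second and counted roughly 188.5 thousand events on each lcore. The same micro-benchmark can be invoked directly with the core-mask and duration flags shown in the log; this is just a sketch, and any other mask or duration works the same way:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1   # -m 0xF = lcores 0-3, -t 1 = run for one second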
00:06:08.047 00:06:08.047 real 0m1.293s 00:06:08.047 user 0m4.116s 00:06:08.047 sys 0m0.055s 00:06:08.047 14:44:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.047 ************************************ 00:06:08.047 14:44:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.047 END TEST event_perf 00:06:08.047 ************************************ 00:06:08.047 14:44:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.047 14:44:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.047 14:44:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.047 14:44:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.047 ************************************ 00:06:08.047 START TEST event_reactor 00:06:08.047 ************************************ 00:06:08.047 14:44:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.047 [2024-11-22 14:44:22.391026] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:08.047 [2024-11-22 14:44:22.391121] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58156 ] 00:06:08.047 [2024-11-22 14:44:22.541207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.047 [2024-11-22 14:44:22.600671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.423 test_start 00:06:09.423 oneshot 00:06:09.423 tick 100 00:06:09.423 tick 100 00:06:09.423 tick 250 00:06:09.423 tick 100 00:06:09.423 tick 100 00:06:09.423 tick 250 00:06:09.423 tick 100 00:06:09.423 tick 500 00:06:09.423 tick 100 00:06:09.423 tick 100 00:06:09.423 tick 250 00:06:09.423 tick 100 00:06:09.423 tick 100 00:06:09.423 test_end 00:06:09.423 00:06:09.423 real 0m1.283s 00:06:09.423 user 0m1.130s 00:06:09.423 sys 0m0.045s 00:06:09.423 14:44:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.423 14:44:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.423 ************************************ 00:06:09.423 END TEST event_reactor 00:06:09.423 ************************************ 00:06:09.423 14:44:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.424 14:44:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:09.424 14:44:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.424 14:44:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 ************************************ 00:06:09.424 START TEST event_reactor_perf 00:06:09.424 ************************************ 00:06:09.424 14:44:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.424 [2024-11-22 14:44:23.730300] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:09.424 [2024-11-22 14:44:23.730483] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58187 ] 00:06:09.424 [2024-11-22 14:44:23.880428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.424 [2024-11-22 14:44:23.939506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.360 test_start 00:06:10.360 test_end 00:06:10.360 Performance: 386314 events per second 00:06:10.360 00:06:10.360 real 0m1.280s 00:06:10.360 user 0m1.122s 00:06:10.360 sys 0m0.051s 00:06:10.360 14:44:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.360 14:44:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.360 ************************************ 00:06:10.360 END TEST event_reactor_perf 00:06:10.360 ************************************ 00:06:10.618 14:44:25 event -- event/event.sh@49 -- # uname -s 00:06:10.618 14:44:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.618 14:44:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.618 14:44:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.618 14:44:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.618 14:44:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.618 ************************************ 00:06:10.618 START TEST event_scheduler 00:06:10.618 ************************************ 00:06:10.618 14:44:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.618 * Looking for test storage... 
00:06:10.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:10.618 14:44:25 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.618 14:44:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.618 14:44:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.618 14:44:25 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.618 14:44:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.619 14:44:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.619 --rc genhtml_branch_coverage=1 00:06:10.619 --rc genhtml_function_coverage=1 00:06:10.619 --rc genhtml_legend=1 00:06:10.619 --rc geninfo_all_blocks=1 00:06:10.619 --rc geninfo_unexecuted_blocks=1 00:06:10.619 00:06:10.619 ' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.619 --rc genhtml_branch_coverage=1 00:06:10.619 --rc genhtml_function_coverage=1 00:06:10.619 --rc genhtml_legend=1 00:06:10.619 --rc geninfo_all_blocks=1 00:06:10.619 --rc geninfo_unexecuted_blocks=1 00:06:10.619 00:06:10.619 ' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.619 --rc genhtml_branch_coverage=1 00:06:10.619 --rc genhtml_function_coverage=1 00:06:10.619 --rc genhtml_legend=1 00:06:10.619 --rc geninfo_all_blocks=1 00:06:10.619 --rc geninfo_unexecuted_blocks=1 00:06:10.619 00:06:10.619 ' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.619 --rc genhtml_branch_coverage=1 00:06:10.619 --rc genhtml_function_coverage=1 00:06:10.619 --rc genhtml_legend=1 00:06:10.619 --rc geninfo_all_blocks=1 00:06:10.619 --rc geninfo_unexecuted_blocks=1 00:06:10.619 00:06:10.619 ' 00:06:10.619 14:44:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.619 14:44:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58261 00:06:10.619 14:44:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.619 14:44:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58261 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58261 ']' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:10.619 14:44:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.619 14:44:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.877 [2024-11-22 14:44:25.317956] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:10.877 [2024-11-22 14:44:25.318111] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58261 ] 00:06:10.877 [2024-11-22 14:44:25.473963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.135 [2024-11-22 14:44:25.542144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.135 [2024-11-22 14:44:25.542288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.135 [2024-11-22 14:44:25.542996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.135 [2024-11-22 14:44:25.543062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:11.702 14:44:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.702 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.702 POWER: Cannot set governor of lcore 0 to performance 00:06:11.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.702 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.702 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.702 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.702 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:11.702 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:11.702 POWER: Unable to set Power Management Environment for lcore 0 00:06:11.702 [2024-11-22 14:44:26.340884] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:11.702 [2024-11-22 14:44:26.340903] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:11.702 [2024-11-22 14:44:26.340947] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:11.702 [2024-11-22 14:44:26.340963] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.702 [2024-11-22 14:44:26.340974] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.702 [2024-11-22 14:44:26.340987] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.702 14:44:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.702 14:44:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 [2024-11-22 14:44:26.410545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.961 [2024-11-22 14:44:26.449840] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:11.961 14:44:26 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.961 14:44:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.961 14:44:26 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 ************************************ 00:06:11.961 START TEST scheduler_create_thread 00:06:11.961 ************************************ 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 2 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 3 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 4 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 5 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 6 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 7 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 8 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 9 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 10 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.961 14:44:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.862 14:44:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.862 14:44:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.862 14:44:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.862 14:44:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.862 14:44:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.428 14:44:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.428 00:06:14.428 real 0m2.617s 00:06:14.428 user 0m0.019s 00:06:14.428 sys 0m0.008s 00:06:14.428 14:44:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.428 14:44:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.428 ************************************ 00:06:14.428 END TEST scheduler_create_thread 00:06:14.428 ************************************ 00:06:14.686 14:44:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.686 14:44:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58261 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58261 ']' 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58261 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58261 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.686 killing process with pid 58261 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58261' 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58261 00:06:14.686 14:44:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58261 00:06:14.944 [2024-11-22 14:44:29.558351] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:15.232 ************************************ 00:06:15.232 END TEST event_scheduler 00:06:15.232 ************************************ 00:06:15.232 00:06:15.232 real 0m4.722s 00:06:15.232 user 0m9.014s 00:06:15.232 sys 0m0.412s 00:06:15.232 14:44:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.232 14:44:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.232 14:44:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.232 14:44:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.232 14:44:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.232 14:44:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.232 14:44:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.232 ************************************ 00:06:15.232 START TEST app_repeat 00:06:15.232 ************************************ 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58355 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.232 Process app_repeat pid: 58355 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58355' 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.232 spdk_app_start Round 0 00:06:15.232 14:44:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58355 ']' 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
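The event_scheduler run above, and every app_repeat round that follows, uses the same pattern: launch the SPDK app with --wait-for-rpc (or on a private -r socket), wait for its UNIX-domain socket, then drive it through scripts/rpc.py. Stripped of the xtrace noise, the scheduler portion amounts to roughly the following sketch (paths relative to the spdk repo; the load-limit/core-limit/core-busy values in the NOTICE lines are the dynamic scheduler's defaults here, not flags passed by the test):

  # launch the test app on 4 cores with RPC-driven init, mirroring scheduler.sh above
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # once /var/tmp/spdk.sock is listening (waitforlisten), select the scheduler and finish init;
  # the POWER/cpufreq errors above only mean the DPDK governor cannot run inside this guest
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init

The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls are test-only RPCs loaded through rpc.py's --plugin scheduler_plugin option, not part of the stock RPC surface.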
00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.232 14:44:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.232 [2024-11-22 14:44:29.857499] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:15.232 [2024-11-22 14:44:29.857600] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58355 ] 00:06:15.533 [2024-11-22 14:44:30.004310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.533 [2024-11-22 14:44:30.067854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.533 [2024-11-22 14:44:30.067877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.533 [2024-11-22 14:44:30.125937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.533 14:44:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.533 14:44:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.533 14:44:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.792 Malloc0 00:06:15.792 14:44:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.358 Malloc1 00:06:16.358 14:44:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.358 14:44:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.617 /dev/nbd0 00:06:16.617 14:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.617 14:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.617 1+0 records in 00:06:16.617 1+0 records out 00:06:16.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287587 s, 14.2 MB/s 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.617 14:44:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.617 14:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.617 14:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.617 14:44:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.876 /dev/nbd1 00:06:16.876 14:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.876 14:44:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.876 1+0 records in 00:06:16.876 1+0 records out 00:06:16.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304855 s, 13.4 MB/s 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.876 14:44:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.135 14:44:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.135 14:44:31 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:17.135 14:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.135 14:44:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.135 14:44:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.135 14:44:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.135 14:44:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.393 { 00:06:17.393 "nbd_device": "/dev/nbd0", 00:06:17.393 "bdev_name": "Malloc0" 00:06:17.393 }, 00:06:17.393 { 00:06:17.393 "nbd_device": "/dev/nbd1", 00:06:17.393 "bdev_name": "Malloc1" 00:06:17.393 } 00:06:17.393 ]' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.393 { 00:06:17.393 "nbd_device": "/dev/nbd0", 00:06:17.393 "bdev_name": "Malloc0" 00:06:17.393 }, 00:06:17.393 { 00:06:17.393 "nbd_device": "/dev/nbd1", 00:06:17.393 "bdev_name": "Malloc1" 00:06:17.393 } 00:06:17.393 ]' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.393 /dev/nbd1' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.393 /dev/nbd1' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.393 256+0 records in 00:06:17.393 256+0 records out 00:06:17.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107681 s, 97.4 MB/s 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.393 256+0 records in 00:06:17.393 256+0 records out 00:06:17.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023504 s, 44.6 MB/s 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.393 14:44:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.393 256+0 records in 00:06:17.393 
256+0 records out 00:06:17.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272097 s, 38.5 MB/s 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.394 14:44:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.394 14:44:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.652 14:44:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.219 14:44:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.476 14:44:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.476 14:44:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.476 14:44:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.476 14:44:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.476 14:44:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.734 14:44:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.992 [2024-11-22 14:44:33.522638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.992 [2024-11-22 14:44:33.575136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.992 [2024-11-22 14:44:33.575148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.992 [2024-11-22 14:44:33.629182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.992 [2024-11-22 14:44:33.629315] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.992 [2024-11-22 14:44:33.629329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.275 14:44:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.275 spdk_app_start Round 1 00:06:22.275 14:44:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:22.275 14:44:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58355 ']' 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
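Each app_repeat round that completes above is the nbd_rpc_data_verify flow: create two 64 MiB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device, read it back, then tear the exports down. Reduced to its RPC and shell essentials, one round looks roughly like this (with /tmp/nbdrandtest standing in for the repo-internal temp file; arguments as in the xtrace):

  sock=/var/tmp/spdk-nbd.sock
  ./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096       # -> Malloc0
  ./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096       # -> Malloc1
  ./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256     # 1 MiB of reference data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$d"                       # verify the write reads back identically
  done

  ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd1

The single-block dd runs against .../test/event/nbdtest are only waitfornbd probes confirming each /dev/nbdX is readable before the real transfer starts.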
00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.275 14:44:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:22.275 14:44:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.534 Malloc0 00:06:22.534 14:44:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.793 Malloc1 00:06:22.793 14:44:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.793 14:44:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.052 /dev/nbd0 00:06:23.052 14:44:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.052 14:44:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.052 1+0 records in 00:06:23.052 1+0 records out 
00:06:23.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364997 s, 11.2 MB/s 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.052 14:44:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.052 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.052 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.052 14:44:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.310 /dev/nbd1 00:06:23.310 14:44:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.310 14:44:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.310 14:44:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.310 1+0 records in 00:06:23.310 1+0 records out 00:06:23.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351589 s, 11.6 MB/s 00:06:23.311 14:44:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.311 14:44:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.311 14:44:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.311 14:44:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.311 14:44:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.311 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.311 14:44:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.311 14:44:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.311 14:44:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.311 14:44:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.878 { 00:06:23.878 "nbd_device": "/dev/nbd0", 00:06:23.878 "bdev_name": "Malloc0" 00:06:23.878 }, 00:06:23.878 { 00:06:23.878 "nbd_device": "/dev/nbd1", 00:06:23.878 "bdev_name": "Malloc1" 00:06:23.878 } 
00:06:23.878 ]' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.878 { 00:06:23.878 "nbd_device": "/dev/nbd0", 00:06:23.878 "bdev_name": "Malloc0" 00:06:23.878 }, 00:06:23.878 { 00:06:23.878 "nbd_device": "/dev/nbd1", 00:06:23.878 "bdev_name": "Malloc1" 00:06:23.878 } 00:06:23.878 ]' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.878 /dev/nbd1' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.878 /dev/nbd1' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.878 256+0 records in 00:06:23.878 256+0 records out 00:06:23.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637643 s, 164 MB/s 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.878 256+0 records in 00:06:23.878 256+0 records out 00:06:23.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255641 s, 41.0 MB/s 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.878 256+0 records in 00:06:23.878 256+0 records out 00:06:23.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316276 s, 33.2 MB/s 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.878 14:44:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.879 14:44:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.879 14:44:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.137 14:44:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.396 14:44:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.968 14:44:39 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.968 14:44:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.968 14:44:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.237 14:44:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.496 [2024-11-22 14:44:40.019125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.496 [2024-11-22 14:44:40.102130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.496 [2024-11-22 14:44:40.102140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.755 [2024-11-22 14:44:40.186907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.755 [2024-11-22 14:44:40.187033] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.755 [2024-11-22 14:44:40.187049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.290 14:44:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.290 spdk_app_start Round 2 00:06:28.290 14:44:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.290 14:44:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58355 ']' 00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
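The "Round N" boundaries are driven from outside the app: event.sh asks the running instance to shut down over RPC and waits, after which app_repeat brings the framework back up for the next iteration, which is why the same waitforlisten message reappears before every round. The boundary above is essentially just:

  # end the current iteration over RPC, then give the app time to reinitialize
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3
  # waitforlisten polls /var/tmp/spdk-nbd.sock until the next spdk_app_start round answers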
00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.290 14:44:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.549 14:44:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.549 14:44:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.549 14:44:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.807 Malloc0 00:06:28.807 14:44:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.065 Malloc1 00:06:29.065 14:44:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.065 14:44:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.324 /dev/nbd0 00:06:29.324 14:44:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.324 14:44:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.324 1+0 records in 00:06:29.324 1+0 records out 
00:06:29.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345754 s, 11.8 MB/s 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.324 14:44:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.324 14:44:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.324 14:44:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.324 14:44:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.890 /dev/nbd1 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.890 1+0 records in 00:06:29.890 1+0 records out 00:06:29.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333072 s, 12.3 MB/s 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.890 14:44:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.890 14:44:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.147 { 00:06:30.147 "nbd_device": "/dev/nbd0", 00:06:30.147 "bdev_name": "Malloc0" 00:06:30.147 }, 00:06:30.147 { 00:06:30.147 "nbd_device": "/dev/nbd1", 00:06:30.147 "bdev_name": "Malloc1" 00:06:30.147 } 
00:06:30.147 ]' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.147 { 00:06:30.147 "nbd_device": "/dev/nbd0", 00:06:30.147 "bdev_name": "Malloc0" 00:06:30.147 }, 00:06:30.147 { 00:06:30.147 "nbd_device": "/dev/nbd1", 00:06:30.147 "bdev_name": "Malloc1" 00:06:30.147 } 00:06:30.147 ]' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.147 /dev/nbd1' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.147 /dev/nbd1' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.147 256+0 records in 00:06:30.147 256+0 records out 00:06:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106778 s, 98.2 MB/s 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.147 256+0 records in 00:06:30.147 256+0 records out 00:06:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249733 s, 42.0 MB/s 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.147 14:44:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.147 256+0 records in 00:06:30.147 256+0 records out 00:06:30.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258417 s, 40.6 MB/s 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.148 14:44:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.148 14:44:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.405 14:44:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.973 14:44:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.231 14:44:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.231 14:44:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.490 14:44:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.749 [2024-11-22 14:44:46.324031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.749 [2024-11-22 14:44:46.406222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.749 [2024-11-22 14:44:46.406235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.005 [2024-11-22 14:44:46.491261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.005 [2024-11-22 14:44:46.491390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.005 [2024-11-22 14:44:46.491406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.532 14:44:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58355 /var/tmp/spdk-nbd.sock 00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58355 ']' 00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.532 14:44:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.790 14:44:49 event.app_repeat -- event/event.sh@39 -- # killprocess 58355 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58355 ']' 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58355 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58355 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58355' 00:06:34.790 killing process with pid 58355 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58355 00:06:34.790 14:44:49 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58355 00:06:35.048 spdk_app_start is called in Round 0. 00:06:35.048 Shutdown signal received, stop current app iteration 00:06:35.048 Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 reinitialization... 00:06:35.048 spdk_app_start is called in Round 1. 00:06:35.048 Shutdown signal received, stop current app iteration 00:06:35.048 Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 reinitialization... 00:06:35.048 spdk_app_start is called in Round 2. 00:06:35.048 Shutdown signal received, stop current app iteration 00:06:35.048 Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 reinitialization... 00:06:35.048 spdk_app_start is called in Round 3. 00:06:35.048 Shutdown signal received, stop current app iteration 00:06:35.048 14:44:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.048 14:44:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.048 00:06:35.048 real 0m19.791s 00:06:35.048 user 0m45.256s 00:06:35.048 sys 0m3.142s 00:06:35.048 14:44:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.048 14:44:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.048 ************************************ 00:06:35.048 END TEST app_repeat 00:06:35.048 ************************************ 00:06:35.048 14:44:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.048 14:44:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.048 14:44:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.049 14:44:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.049 14:44:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.049 ************************************ 00:06:35.049 START TEST cpu_locks 00:06:35.049 ************************************ 00:06:35.049 14:44:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.306 * Looking for test storage... 
00:06:35.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.306 14:44:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.306 14:44:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.306 14:44:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.306 14:44:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.306 14:44:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.307 14:44:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.307 --rc genhtml_branch_coverage=1 00:06:35.307 --rc genhtml_function_coverage=1 00:06:35.307 --rc genhtml_legend=1 00:06:35.307 --rc geninfo_all_blocks=1 00:06:35.307 --rc geninfo_unexecuted_blocks=1 00:06:35.307 00:06:35.307 ' 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.307 --rc genhtml_branch_coverage=1 00:06:35.307 --rc genhtml_function_coverage=1 
00:06:35.307 --rc genhtml_legend=1 00:06:35.307 --rc geninfo_all_blocks=1 00:06:35.307 --rc geninfo_unexecuted_blocks=1 00:06:35.307 00:06:35.307 ' 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.307 --rc genhtml_branch_coverage=1 00:06:35.307 --rc genhtml_function_coverage=1 00:06:35.307 --rc genhtml_legend=1 00:06:35.307 --rc geninfo_all_blocks=1 00:06:35.307 --rc geninfo_unexecuted_blocks=1 00:06:35.307 00:06:35.307 ' 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.307 --rc genhtml_branch_coverage=1 00:06:35.307 --rc genhtml_function_coverage=1 00:06:35.307 --rc genhtml_legend=1 00:06:35.307 --rc geninfo_all_blocks=1 00:06:35.307 --rc geninfo_unexecuted_blocks=1 00:06:35.307 00:06:35.307 ' 00:06:35.307 14:44:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.307 14:44:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.307 14:44:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.307 14:44:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.307 14:44:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.307 ************************************ 00:06:35.307 START TEST default_locks 00:06:35.307 ************************************ 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58810 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58810 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.307 14:44:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.307 [2024-11-22 14:44:49.955363] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:35.307 [2024-11-22 14:44:49.955516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:06:35.565 [2024-11-22 14:44:50.100825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.566 [2024-11-22 14:44:50.166880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.824 [2024-11-22 14:44:50.245636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.824 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.824 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:35.824 14:44:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58810 00:06:35.824 14:44:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58810 00:06:35.824 14:44:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58810 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58810 ']' 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58810 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58810 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.391 killing process with pid 58810 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58810' 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58810 00:06:36.391 14:44:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58810 00:06:36.956 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58810 00:06:36.956 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:36.956 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58810 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58810 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.957 
14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 ERROR: process (pid: 58810) is no longer running 00:06:36.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58810) - No such process 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.957 00:06:36.957 real 0m1.475s 00:06:36.957 user 0m1.421s 00:06:36.957 sys 0m0.575s 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.957 14:44:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 ************************************ 00:06:36.957 END TEST default_locks 00:06:36.957 ************************************ 00:06:36.957 14:44:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.957 14:44:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.957 14:44:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.957 14:44:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 ************************************ 00:06:36.957 START TEST default_locks_via_rpc 00:06:36.957 ************************************ 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58849 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58849 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58849 ']' 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:36.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.957 14:44:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.957 [2024-11-22 14:44:51.484511] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:36.957 [2024-11-22 14:44:51.484618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58849 ] 00:06:37.215 [2024-11-22 14:44:51.635010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.215 [2024-11-22 14:44:51.706438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.215 [2024-11-22 14:44:51.788953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58849 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58849 00:06:37.473 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58849 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58849 ']' 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58849 00:06:38.039 14:44:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58849 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.039 killing process with pid 58849 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58849' 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58849 00:06:38.039 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58849 00:06:38.606 00:06:38.606 real 0m1.556s 00:06:38.606 user 0m1.519s 00:06:38.606 sys 0m0.607s 00:06:38.606 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.606 14:44:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.606 ************************************ 00:06:38.606 END TEST default_locks_via_rpc 00:06:38.606 ************************************ 00:06:38.606 14:44:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.606 14:44:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.606 14:44:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.606 14:44:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.606 ************************************ 00:06:38.606 START TEST non_locking_app_on_locked_coremask 00:06:38.606 ************************************ 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58898 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58898 /var/tmp/spdk.sock 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58898 ']' 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.606 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.606 [2024-11-22 14:44:53.088874] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:38.606 [2024-11-22 14:44:53.088968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:06:38.606 [2024-11-22 14:44:53.238263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.864 [2024-11-22 14:44:53.312395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.864 [2024-11-22 14:44:53.395937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58906 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58906 /var/tmp/spdk2.sock 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58906 ']' 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.122 14:44:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.122 [2024-11-22 14:44:53.697119] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:39.122 [2024-11-22 14:44:53.697253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:06:39.381 [2024-11-22 14:44:53.865987] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.381 [2024-11-22 14:44:53.866029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.381 [2024-11-22 14:44:54.003658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.640 [2024-11-22 14:44:54.159937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.206 14:44:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.206 14:44:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.206 14:44:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58898 00:06:40.206 14:44:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.206 14:44:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58898 ']' 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.142 killing process with pid 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58898' 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58898 00:06:41.142 14:44:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58898 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58906 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58906 ']' 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58906 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58906 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.079 killing process with pid 58906 00:06:42.079 14:44:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58906' 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58906 00:06:42.079 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58906 00:06:42.338 00:06:42.338 real 0m3.880s 00:06:42.338 user 0m4.286s 00:06:42.338 sys 0m1.192s 00:06:42.338 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.338 14:44:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.338 ************************************ 00:06:42.338 END TEST non_locking_app_on_locked_coremask 00:06:42.338 ************************************ 00:06:42.338 14:44:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.338 14:44:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.338 14:44:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.338 14:44:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.338 ************************************ 00:06:42.338 START TEST locking_app_on_unlocked_coremask 00:06:42.338 ************************************ 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58974 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58974 /var/tmp/spdk.sock 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58974 ']' 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.338 14:44:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.597 [2024-11-22 14:44:57.052037] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:42.597 [2024-11-22 14:44:57.052149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:06:42.597 [2024-11-22 14:44:57.207435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.597 [2024-11-22 14:44:57.207495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.857 [2024-11-22 14:44:57.276343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.857 [2024-11-22 14:44:57.353812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58982 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58982 /var/tmp/spdk2.sock 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58982 ']' 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.116 14:44:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.116 [2024-11-22 14:44:57.648914] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:43.116 [2024-11-22 14:44:57.649044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:06:43.375 [2024-11-22 14:44:57.814718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.375 [2024-11-22 14:44:57.959322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.634 [2024-11-22 14:44:58.117535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.201 14:44:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.201 14:44:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.201 14:44:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58982 00:06:44.201 14:44:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.201 14:44:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58982 00:06:45.134 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58974 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58974 ']' 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58974 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58974 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.135 killing process with pid 58974 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58974' 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58974 00:06:45.135 14:44:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58974 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58982 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58982 ']' 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58982 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58982 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.070 killing process with pid 58982 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58982' 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58982 00:06:46.070 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58982 00:06:46.329 00:06:46.329 real 0m4.003s 00:06:46.329 user 0m4.446s 00:06:46.329 sys 0m1.181s 00:06:46.329 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.329 14:45:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.329 ************************************ 00:06:46.329 END TEST locking_app_on_unlocked_coremask 00:06:46.329 ************************************ 00:06:46.609 14:45:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.609 14:45:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.609 14:45:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.609 14:45:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.609 ************************************ 00:06:46.609 START TEST locking_app_on_locked_coremask 00:06:46.609 ************************************ 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59055 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59055 /var/tmp/spdk.sock 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59055 ']' 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.609 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.609 [2024-11-22 14:45:01.095339] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:46.609 [2024-11-22 14:45:01.095455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:06:46.609 [2024-11-22 14:45:01.243481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.889 [2024-11-22 14:45:01.310849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.889 [2024-11-22 14:45:01.387215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59063 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59063 /var/tmp/spdk2.sock 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59063 /var/tmp/spdk2.sock 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59063 /var/tmp/spdk2.sock 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59063 ']' 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.160 14:45:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.160 [2024-11-22 14:45:01.674253] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:47.160 [2024-11-22 14:45:01.674394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ] 00:06:47.418 [2024-11-22 14:45:01.847450] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59055 has claimed it. 00:06:47.418 [2024-11-22 14:45:01.847509] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.982 ERROR: process (pid: 59063) is no longer running 00:06:47.982 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59063) - No such process 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59055 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59055 00:06:47.982 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59055 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59055 ']' 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59055 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.238 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59055 00:06:48.495 killing process with pid 59055 00:06:48.495 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.495 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.495 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59055' 00:06:48.495 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59055 00:06:48.495 14:45:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59055 00:06:48.753 00:06:48.753 real 0m2.248s 00:06:48.753 user 0m2.543s 00:06:48.753 sys 0m0.642s 00:06:48.753 14:45:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.753 14:45:03 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:48.753 ************************************ 00:06:48.753 END TEST locking_app_on_locked_coremask 00:06:48.753 ************************************ 00:06:48.753 14:45:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.753 14:45:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.753 14:45:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.753 14:45:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.753 ************************************ 00:06:48.753 START TEST locking_overlapped_coremask 00:06:48.753 ************************************ 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59113 00:06:48.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59113 /var/tmp/spdk.sock 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59113 ']' 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.753 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.753 [2024-11-22 14:45:03.382289] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:48.753 [2024-11-22 14:45:03.382404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:06:49.010 [2024-11-22 14:45:03.526802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.010 [2024-11-22 14:45:03.591198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.010 [2024-11-22 14:45:03.591415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.010 [2024-11-22 14:45:03.591431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.010 [2024-11-22 14:45:03.671041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59125 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59125 /var/tmp/spdk2.sock 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59125 /var/tmp/spdk2.sock 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59125 /var/tmp/spdk2.sock 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59125 ']' 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.269 14:45:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.527 [2024-11-22 14:45:03.954207] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:49.527 [2024-11-22 14:45:03.954885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59125 ] 00:06:49.527 [2024-11-22 14:45:04.119153] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59113 has claimed it. 00:06:49.527 [2024-11-22 14:45:04.119233] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.095 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59125) - No such process 00:06:50.095 ERROR: process (pid: 59125) is no longer running 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59113 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59113 ']' 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59113 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59113 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59113' 00:06:50.095 killing process with pid 59113 00:06:50.095 14:45:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59113 00:06:50.095 14:45:04 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59113 00:06:50.662 00:06:50.662 real 0m1.801s 00:06:50.662 user 0m4.812s 00:06:50.662 sys 0m0.457s 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.662 ************************************ 00:06:50.662 END TEST locking_overlapped_coremask 00:06:50.662 ************************************ 00:06:50.662 14:45:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.662 14:45:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.662 14:45:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.662 14:45:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.662 ************************************ 00:06:50.662 START TEST locking_overlapped_coremask_via_rpc 00:06:50.662 ************************************ 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59165 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59165 /var/tmp/spdk.sock 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59165 ']' 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.662 14:45:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.662 [2024-11-22 14:45:05.242543] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:50.662 [2024-11-22 14:45:05.242673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:06:50.920 [2024-11-22 14:45:05.398230] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.920 [2024-11-22 14:45:05.398290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.920 [2024-11-22 14:45:05.476352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.920 [2024-11-22 14:45:05.476524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.920 [2024-11-22 14:45:05.476531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.920 [2024-11-22 14:45:05.557920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59183 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59183 /var/tmp/spdk2.sock 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.857 14:45:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.857 [2024-11-22 14:45:06.397918] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:51.857 [2024-11-22 14:45:06.398049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:06:52.114 [2024-11-22 14:45:06.566171] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.115 [2024-11-22 14:45:06.566276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.115 [2024-11-22 14:45:06.767549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.115 [2024-11-22 14:45:06.770633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.115 [2024-11-22 14:45:06.770636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.373 [2024-11-22 14:45:07.002604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.953 [2024-11-22 14:45:07.582509] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59165 has claimed it. 00:06:52.953 request: 00:06:52.953 { 00:06:52.953 "method": "framework_enable_cpumask_locks", 00:06:52.953 "req_id": 1 00:06:52.953 } 00:06:52.953 Got JSON-RPC error response 00:06:52.953 response: 00:06:52.953 { 00:06:52.953 "code": -32603, 00:06:52.953 "message": "Failed to claim CPU core: 2" 00:06:52.953 } 00:06:52.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
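Note on the failure above: in this via_rpc variant both targets start with --disable-cpumask-locks, so the /var/tmp/spdk_cpu_lock_* files are only taken once framework_enable_cpumask_locks is issued, and the second target (mask 0x1c) overlaps the first (mask 0x7) on core 2, which is why the RPC returns -32603 "Failed to claim CPU core: 2". A minimal manual sketch of what the trace is exercising is shown below; the binaries, sockets and flags are the ones appearing in the trace itself, and the snippet is illustrative only, not part of the recorded run.

  # start two targets with overlapping core masks, neither claiming CPU locks yet
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4
  # first target claims /var/tmp/spdk_cpu_lock_000..002 and succeeds
  ./scripts/rpc.py framework_enable_cpumask_locks
  # second target then fails to claim core 2, returning the JSON-RPC error logged above
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks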
00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59165 /var/tmp/spdk.sock 00:06:52.953 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59165 ']' 00:06:52.954 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.954 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.954 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.954 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.954 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59183 /var/tmp/spdk2.sock 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.532 14:45:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.791 ************************************ 00:06:53.791 END TEST locking_overlapped_coremask_via_rpc 00:06:53.791 ************************************ 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.791 00:06:53.791 real 0m3.130s 00:06:53.791 user 0m1.754s 00:06:53.791 sys 0m0.263s 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.791 14:45:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.791 14:45:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.791 14:45:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59165 ]] 00:06:53.791 14:45:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59165 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59165 ']' 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59165 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59165 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59165' 00:06:53.791 killing process with pid 59165 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59165 00:06:53.791 14:45:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59165 00:06:54.725 14:45:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59183 ]] 00:06:54.725 14:45:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59183 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59183 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.725 
14:45:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59183 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.725 killing process with pid 59183 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59183' 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59183 00:06:54.725 14:45:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59183 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59165 ]] 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59165 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59165 ']' 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59165 00:06:55.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59165) - No such process 00:06:55.291 Process with pid 59165 is not found 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59165 is not found' 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59183 ]] 00:06:55.291 Process with pid 59183 is not found 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59183 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59183 00:06:55.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59183) - No such process 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59183 is not found' 00:06:55.291 14:45:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.291 00:06:55.291 real 0m20.051s 00:06:55.291 user 0m37.385s 00:06:55.291 sys 0m6.082s 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.291 14:45:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.291 ************************************ 00:06:55.291 END TEST cpu_locks 00:06:55.291 ************************************ 00:06:55.291 ************************************ 00:06:55.291 END TEST event 00:06:55.291 ************************************ 00:06:55.291 00:06:55.291 real 0m48.942s 00:06:55.291 user 1m38.279s 00:06:55.291 sys 0m10.039s 00:06:55.291 14:45:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.291 14:45:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.291 14:45:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.291 14:45:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.291 14:45:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.291 14:45:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.291 ************************************ 00:06:55.292 START TEST thread 00:06:55.292 ************************************ 00:06:55.292 14:45:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:55.292 * Looking for test storage... 
00:06:55.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:55.292 14:45:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.292 14:45:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.292 14:45:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.551 14:45:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.551 14:45:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.551 14:45:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.551 14:45:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.551 14:45:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.551 14:45:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.551 14:45:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.551 14:45:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.551 14:45:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.551 14:45:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.551 14:45:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.551 14:45:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.551 14:45:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.551 14:45:10 thread -- scripts/common.sh@345 -- # : 1 00:06:55.551 14:45:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.551 14:45:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.551 14:45:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.551 14:45:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.551 14:45:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.551 14:45:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.551 14:45:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.551 14:45:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.551 14:45:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.551 14:45:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.551 14:45:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.551 14:45:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.551 14:45:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.551 14:45:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.551 14:45:10 thread -- scripts/common.sh@368 -- # return 0 00:06:55.551 14:45:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.551 14:45:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.551 --rc genhtml_branch_coverage=1 00:06:55.551 --rc genhtml_function_coverage=1 00:06:55.551 --rc genhtml_legend=1 00:06:55.551 --rc geninfo_all_blocks=1 00:06:55.552 --rc geninfo_unexecuted_blocks=1 00:06:55.552 00:06:55.552 ' 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.552 --rc genhtml_branch_coverage=1 00:06:55.552 --rc genhtml_function_coverage=1 00:06:55.552 --rc genhtml_legend=1 00:06:55.552 --rc geninfo_all_blocks=1 00:06:55.552 --rc geninfo_unexecuted_blocks=1 00:06:55.552 00:06:55.552 ' 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:55.552 --rc genhtml_branch_coverage=1 00:06:55.552 --rc genhtml_function_coverage=1 00:06:55.552 --rc genhtml_legend=1 00:06:55.552 --rc geninfo_all_blocks=1 00:06:55.552 --rc geninfo_unexecuted_blocks=1 00:06:55.552 00:06:55.552 ' 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.552 --rc genhtml_branch_coverage=1 00:06:55.552 --rc genhtml_function_coverage=1 00:06:55.552 --rc genhtml_legend=1 00:06:55.552 --rc geninfo_all_blocks=1 00:06:55.552 --rc geninfo_unexecuted_blocks=1 00:06:55.552 00:06:55.552 ' 00:06:55.552 14:45:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.552 14:45:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.552 ************************************ 00:06:55.552 START TEST thread_poller_perf 00:06:55.552 ************************************ 00:06:55.552 14:45:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.552 [2024-11-22 14:45:10.056880] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:55.552 [2024-11-22 14:45:10.057165] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59330 ] 00:06:55.552 [2024-11-22 14:45:10.212097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.811 [2024-11-22 14:45:10.274324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.811 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:56.746 [2024-11-22T14:45:11.411Z] ====================================== 00:06:56.746 [2024-11-22T14:45:11.411Z] busy:2213267686 (cyc) 00:06:56.746 [2024-11-22T14:45:11.411Z] total_run_count: 298000 00:06:56.746 [2024-11-22T14:45:11.411Z] tsc_hz: 2200000000 (cyc) 00:06:56.746 [2024-11-22T14:45:11.411Z] ====================================== 00:06:56.746 [2024-11-22T14:45:11.411Z] poller_cost: 7427 (cyc), 3375 (nsec) 00:06:56.746 ************************************ 00:06:56.746 END TEST thread_poller_perf 00:06:56.746 ************************************ 00:06:56.746 00:06:56.746 real 0m1.304s 00:06:56.746 user 0m1.143s 00:06:56.746 sys 0m0.052s 00:06:56.746 14:45:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.746 14:45:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.746 14:45:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.746 14:45:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:56.746 14:45:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.746 14:45:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.746 ************************************ 00:06:56.746 START TEST thread_poller_perf 00:06:56.746 ************************************ 00:06:56.746 14:45:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.746 [2024-11-22 14:45:11.398838] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:06:56.746 [2024-11-22 14:45:11.398949] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59360 ] 00:06:57.004 [2024-11-22 14:45:11.553070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.004 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:57.004 [2024-11-22 14:45:11.626043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.377 [2024-11-22T14:45:13.042Z] ====================================== 00:06:58.377 [2024-11-22T14:45:13.042Z] busy:2202733452 (cyc) 00:06:58.377 [2024-11-22T14:45:13.042Z] total_run_count: 3675000 00:06:58.377 [2024-11-22T14:45:13.042Z] tsc_hz: 2200000000 (cyc) 00:06:58.377 [2024-11-22T14:45:13.042Z] ====================================== 00:06:58.377 [2024-11-22T14:45:13.042Z] poller_cost: 599 (cyc), 272 (nsec) 00:06:58.377 ************************************ 00:06:58.377 END TEST thread_poller_perf 00:06:58.377 ************************************ 00:06:58.377 00:06:58.377 real 0m1.302s 00:06:58.377 user 0m1.139s 00:06:58.377 sys 0m0.053s 00:06:58.377 14:45:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.377 14:45:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 14:45:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.377 ************************************ 00:06:58.377 END TEST thread 00:06:58.377 ************************************ 00:06:58.377 00:06:58.377 real 0m2.899s 00:06:58.377 user 0m2.427s 00:06:58.377 sys 0m0.250s 00:06:58.377 14:45:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.377 14:45:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 14:45:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:58.377 14:45:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.377 14:45:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.377 14:45:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.377 14:45:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 ************************************ 00:06:58.377 START TEST app_cmdline 00:06:58.377 ************************************ 00:06:58.377 14:45:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:58.377 * Looking for test storage... 
00:06:58.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:58.377 14:45:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.377 14:45:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.377 14:45:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.377 14:45:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.377 14:45:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.378 14:45:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 14:45:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.378 14:45:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59442 00:06:58.378 14:45:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59442 00:06:58.378 14:45:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59442 ']' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.378 14:45:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.378 [2024-11-22 14:45:13.012369] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:06:58.378 [2024-11-22 14:45:13.013066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:06:58.636 [2024-11-22 14:45:13.169350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.636 [2024-11-22 14:45:13.259873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.895 [2024-11-22 14:45:13.380083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.519 14:45:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.519 14:45:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.519 14:45:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:59.778 { 00:06:59.778 "version": "SPDK v25.01-pre git sha1 1e70ad0e1", 00:06:59.778 "fields": { 00:06:59.778 "major": 25, 00:06:59.778 "minor": 1, 00:06:59.778 "patch": 0, 00:06:59.778 "suffix": "-pre", 00:06:59.778 "commit": "1e70ad0e1" 00:06:59.778 } 00:06:59.778 } 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.778 14:45:14 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.778 14:45:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.778 14:45:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.778 14:45:14 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.036 14:45:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.036 14:45:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.036 14:45:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:00.036 14:45:14 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.294 request: 00:07:00.294 { 00:07:00.294 "method": "env_dpdk_get_mem_stats", 00:07:00.294 "req_id": 1 00:07:00.294 } 00:07:00.294 Got JSON-RPC error response 00:07:00.294 response: 00:07:00.294 { 00:07:00.294 "code": -32601, 00:07:00.294 "message": "Method not found" 00:07:00.294 } 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.294 14:45:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59442 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59442 ']' 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59442 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59442 00:07:00.294 killing process with pid 59442 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59442' 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 59442 00:07:00.294 14:45:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 59442 00:07:00.861 00:07:00.861 real 0m2.472s 00:07:00.861 user 0m3.073s 00:07:00.861 sys 0m0.604s 00:07:00.861 14:45:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.861 14:45:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.861 ************************************ 00:07:00.861 END TEST app_cmdline 00:07:00.861 ************************************ 00:07:00.861 14:45:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.861 14:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.861 14:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.861 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.861 ************************************ 00:07:00.861 START TEST version 00:07:00.861 ************************************ 00:07:00.861 14:45:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.861 * Looking for test storage... 
00:07:00.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.861 14:45:15 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.861 14:45:15 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.861 14:45:15 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.861 14:45:15 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.861 14:45:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.861 14:45:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.861 14:45:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.861 14:45:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.861 14:45:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.861 14:45:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.861 14:45:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.861 14:45:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.861 14:45:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.861 14:45:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.861 14:45:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.861 14:45:15 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.861 14:45:15 version -- scripts/common.sh@345 -- # : 1 00:07:00.861 14:45:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.861 14:45:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.861 14:45:15 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.861 14:45:15 version -- scripts/common.sh@353 -- # local d=1 00:07:00.861 14:45:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.861 14:45:15 version -- scripts/common.sh@355 -- # echo 1 00:07:00.861 14:45:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.861 14:45:15 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.861 14:45:15 version -- scripts/common.sh@353 -- # local d=2 00:07:00.861 14:45:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.861 14:45:15 version -- scripts/common.sh@355 -- # echo 2 00:07:00.861 14:45:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.861 14:45:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.861 14:45:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.861 14:45:15 version -- scripts/common.sh@368 -- # return 0 00:07:00.862 14:45:15 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.862 14:45:15 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.862 --rc genhtml_branch_coverage=1 00:07:00.862 --rc genhtml_function_coverage=1 00:07:00.862 --rc genhtml_legend=1 00:07:00.862 --rc geninfo_all_blocks=1 00:07:00.862 --rc geninfo_unexecuted_blocks=1 00:07:00.862 00:07:00.862 ' 00:07:00.862 14:45:15 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.862 --rc genhtml_branch_coverage=1 00:07:00.862 --rc genhtml_function_coverage=1 00:07:00.862 --rc genhtml_legend=1 00:07:00.862 --rc geninfo_all_blocks=1 00:07:00.862 --rc geninfo_unexecuted_blocks=1 00:07:00.862 00:07:00.862 ' 00:07:00.862 14:45:15 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.862 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.862 --rc genhtml_branch_coverage=1 00:07:00.862 --rc genhtml_function_coverage=1 00:07:00.862 --rc genhtml_legend=1 00:07:00.862 --rc geninfo_all_blocks=1 00:07:00.862 --rc geninfo_unexecuted_blocks=1 00:07:00.862 00:07:00.862 ' 00:07:00.862 14:45:15 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.862 --rc genhtml_branch_coverage=1 00:07:00.862 --rc genhtml_function_coverage=1 00:07:00.862 --rc genhtml_legend=1 00:07:00.862 --rc geninfo_all_blocks=1 00:07:00.862 --rc geninfo_unexecuted_blocks=1 00:07:00.862 00:07:00.862 ' 00:07:00.862 14:45:15 version -- app/version.sh@17 -- # get_header_version major 00:07:00.862 14:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.862 14:45:15 version -- app/version.sh@17 -- # major=25 00:07:00.862 14:45:15 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.862 14:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.862 14:45:15 version -- app/version.sh@18 -- # minor=1 00:07:00.862 14:45:15 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.862 14:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.862 14:45:15 version -- app/version.sh@19 -- # patch=0 00:07:00.862 14:45:15 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.862 14:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.862 14:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.862 14:45:15 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.862 14:45:15 version -- app/version.sh@22 -- # version=25.1 00:07:00.862 14:45:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.862 14:45:15 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.862 14:45:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.862 14:45:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.120 14:45:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.120 14:45:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.120 00:07:01.120 real 0m0.272s 00:07:01.120 user 0m0.179s 00:07:01.120 sys 0m0.126s 00:07:01.120 ************************************ 00:07:01.120 END TEST version 00:07:01.120 ************************************ 00:07:01.120 14:45:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.120 14:45:15 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.120 14:45:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.120 14:45:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.120 14:45:15 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.120 14:45:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.120 14:45:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.120 14:45:15 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:01.120 14:45:15 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:01.120 14:45:15 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:01.120 14:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.120 14:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.120 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.120 ************************************ 00:07:01.120 START TEST spdk_dd 00:07:01.120 ************************************ 00:07:01.120 14:45:15 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:01.120 * Looking for test storage... 00:07:01.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:01.120 14:45:15 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.120 14:45:15 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.120 14:45:15 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.120 14:45:15 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:01.120 14:45:15 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.379 14:45:15 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.379 14:45:15 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:01.379 14:45:15 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:01.379 14:45:15 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:01.380 14:45:15 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.380 14:45:15 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.380 --rc genhtml_branch_coverage=1 00:07:01.380 --rc genhtml_function_coverage=1 00:07:01.380 --rc genhtml_legend=1 00:07:01.380 --rc geninfo_all_blocks=1 00:07:01.380 --rc geninfo_unexecuted_blocks=1 00:07:01.380 00:07:01.380 ' 00:07:01.380 14:45:15 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.380 --rc genhtml_branch_coverage=1 00:07:01.380 --rc genhtml_function_coverage=1 00:07:01.380 --rc genhtml_legend=1 00:07:01.380 --rc geninfo_all_blocks=1 00:07:01.380 --rc geninfo_unexecuted_blocks=1 00:07:01.380 00:07:01.380 ' 00:07:01.380 14:45:15 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.380 --rc genhtml_branch_coverage=1 00:07:01.380 --rc genhtml_function_coverage=1 00:07:01.380 --rc genhtml_legend=1 00:07:01.380 --rc geninfo_all_blocks=1 00:07:01.380 --rc geninfo_unexecuted_blocks=1 00:07:01.380 00:07:01.380 ' 00:07:01.380 14:45:15 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.380 --rc genhtml_branch_coverage=1 00:07:01.380 --rc genhtml_function_coverage=1 00:07:01.380 --rc genhtml_legend=1 00:07:01.380 --rc geninfo_all_blocks=1 00:07:01.380 --rc geninfo_unexecuted_blocks=1 00:07:01.380 00:07:01.380 ' 00:07:01.380 14:45:15 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.380 14:45:15 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.380 14:45:15 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.380 14:45:15 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.380 14:45:15 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.380 14:45:15 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:01.380 14:45:15 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.380 14:45:15 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:01.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:01.640 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:01.640 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:01.640 14:45:16 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:01.640 14:45:16 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:01.640 14:45:16 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:01.640 14:45:16 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:01.640 14:45:16 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.640 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:01.641 * spdk_dd linked to liburing 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:01.641 14:45:16 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:01.641 14:45:16 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:01.901 14:45:16 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:01.902 14:45:16 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:01.902 14:45:16 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:01.902 14:45:16 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:01.902 14:45:16 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:01.902 14:45:16 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:01.902 14:45:16 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:01.902 14:45:16 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:01.902 14:45:16 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:01.902 14:45:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:01.902 14:45:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.902 14:45:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:01.902 ************************************ 00:07:01.902 START TEST spdk_dd_basic_rw 00:07:01.902 ************************************ 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:01.902 * Looking for test storage... 00:07:01.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.902 --rc genhtml_branch_coverage=1 00:07:01.902 --rc genhtml_function_coverage=1 00:07:01.902 --rc genhtml_legend=1 00:07:01.902 --rc geninfo_all_blocks=1 00:07:01.902 --rc geninfo_unexecuted_blocks=1 00:07:01.902 00:07:01.902 ' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.902 --rc genhtml_branch_coverage=1 00:07:01.902 --rc genhtml_function_coverage=1 00:07:01.902 --rc genhtml_legend=1 00:07:01.902 --rc geninfo_all_blocks=1 00:07:01.902 --rc geninfo_unexecuted_blocks=1 00:07:01.902 00:07:01.902 ' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.902 --rc genhtml_branch_coverage=1 00:07:01.902 --rc genhtml_function_coverage=1 00:07:01.902 --rc genhtml_legend=1 00:07:01.902 --rc geninfo_all_blocks=1 00:07:01.902 --rc geninfo_unexecuted_blocks=1 00:07:01.902 00:07:01.902 ' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.902 --rc genhtml_branch_coverage=1 00:07:01.902 --rc genhtml_function_coverage=1 00:07:01.902 --rc genhtml_legend=1 00:07:01.902 --rc geninfo_all_blocks=1 00:07:01.902 --rc geninfo_unexecuted_blocks=1 00:07:01.902 00:07:01.902 ' 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.902 14:45:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
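The identify dump that follows is what get_native_nvme_bs matches to pick the drive's native block size: it extracts the current LBA format index and then that format's data size. A minimal bash sketch of that parse, reconstructed from the two regexes visible in the trace below (an illustration, not the verbatim dd/common.sh source):

    # Reconstructed sketch of get_native_nvme_bs; paths and regexes taken from the trace below.
    pci=0000:00:10.0
    mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re_lbaf='Current LBA Format: *LBA Format #([0-9]+)'
    if [[ ${id[*]} =~ $re_lbaf ]]; then
        lbaf=${BASH_REMATCH[1]}                         # "04" in this run
        re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        if [[ ${id[*]} =~ $re_bs ]]; then
            echo "${BASH_REMATCH[1]}"                   # 4096, consumed as native_bs by basic_rw.sh
        fi
    fi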
00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:01.903 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:02.163 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:02.163 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.164 ************************************ 00:07:02.164 START TEST dd_bs_lt_native_bs 00:07:02.164 ************************************ 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.164 14:45:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:02.164 { 00:07:02.164 "subsystems": [ 00:07:02.164 { 00:07:02.164 "subsystem": "bdev", 00:07:02.164 "config": [ 00:07:02.164 { 00:07:02.164 "params": { 00:07:02.164 "trtype": "pcie", 00:07:02.164 "traddr": "0000:00:10.0", 00:07:02.164 "name": "Nvme0" 00:07:02.164 }, 00:07:02.164 "method": "bdev_nvme_attach_controller" 00:07:02.164 }, 00:07:02.164 { 00:07:02.164 "method": "bdev_wait_for_examine" 00:07:02.164 } 00:07:02.164 ] 00:07:02.164 } 00:07:02.164 ] 00:07:02.164 } 00:07:02.164 [2024-11-22 14:45:16.813448] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:02.164 [2024-11-22 14:45:16.813583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59794 ] 00:07:02.423 [2024-11-22 14:45:16.965729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.423 [2024-11-22 14:45:17.043086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.682 [2024-11-22 14:45:17.109102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.682 [2024-11-22 14:45:17.232593] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:02.682 [2024-11-22 14:45:17.232692] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.972 [2024-11-22 14:45:17.373201] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.972 ************************************ 00:07:02.972 END TEST dd_bs_lt_native_bs 00:07:02.972 ************************************ 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.972 00:07:02.972 real 0m0.699s 00:07:02.972 user 0m0.476s 00:07:02.972 sys 0m0.176s 00:07:02.972 
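The dd_bs_lt_native_bs case that just finished is a negative test: spdk_dd takes 4096 bytes from LBA Format #04 in the identify data above as the native block size of Nvme0n1, rejects the --bs=2048 request with "--bs value cannot be less than input (1) neither output (4096) native block size", and the NOT wrapper counts that non-zero exit as a pass. A rough standalone reproduction, assuming the same repo layout, the 0000:00:10.0 controller and the Nvme0n1 bdev from this run (conf.json and dd.in are assumed scratch files, not something the harness writes out):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > conf.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
head -c 8192 /dev/urandom > dd.in            # small payload to push through the bdev
if "$SPDK_DD" --if=dd.in --ob=Nvme0n1 --bs=2048 --json conf.json; then
  echo "unexpected: a --bs below the 4096-byte native block size was accepted"
else
  echo "expected rejection: --bs must be at least 4096 on this namespace"
fi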
14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.972 ************************************ 00:07:02.972 START TEST dd_rw 00:07:02.972 ************************************ 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:02.972 14:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.909 14:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:03.909 14:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:03.909 14:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.909 14:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.909 [2024-11-22 14:45:18.353827] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
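Everything from here to the END TEST dd_rw marker repeats one pattern per block-size/queue-depth pair: write a generated buffer into Nvme0n1, read it back, diff the two dumps, then scrub the bdev before the next pair. The block sizes are native_bs shifted left by 0..2 (4096, 8192, 16384 bytes) at queue depths 1 and 64, copying 15, 7 and 3 blocks respectively (61440, 57344 and 49152 bytes). Condensed into a plain loop, with dd.dump0/dd.dump1 and the bdev name taken from this run, the gen_bytes helper approximated with /dev/urandom, and conf.json the same assumed config file as in the sketch above:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
for bs in 4096 8192 16384; do
  count=$((61440 / bs))                      # 15, 7 and 3 blocks, as in the traces below
  for qd in 1 64; do
    head -c $((bs * count)) /dev/urandom > dd.dump0           # stand-in for gen_bytes
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json conf.json
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json conf.json
    diff -q dd.dump0 dd.dump1                                 # round trip must be byte-identical
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json conf.json   # scrub
  done
done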
00:07:03.909 [2024-11-22 14:45:18.353949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:07:03.909 { 00:07:03.909 "subsystems": [ 00:07:03.909 { 00:07:03.909 "subsystem": "bdev", 00:07:03.909 "config": [ 00:07:03.909 { 00:07:03.909 "params": { 00:07:03.909 "trtype": "pcie", 00:07:03.909 "traddr": "0000:00:10.0", 00:07:03.909 "name": "Nvme0" 00:07:03.909 }, 00:07:03.909 "method": "bdev_nvme_attach_controller" 00:07:03.909 }, 00:07:03.909 { 00:07:03.909 "method": "bdev_wait_for_examine" 00:07:03.909 } 00:07:03.909 ] 00:07:03.909 } 00:07:03.909 ] 00:07:03.909 } 00:07:03.909 [2024-11-22 14:45:18.508231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.167 [2024-11-22 14:45:18.588427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.167 [2024-11-22 14:45:18.650814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.167  [2024-11-22T14:45:19.090Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:04.425 00:07:04.425 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:04.425 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.425 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.425 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.684 [2024-11-22 14:45:19.096270] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:04.684 [2024-11-22 14:45:19.096383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:07:04.684 { 00:07:04.684 "subsystems": [ 00:07:04.684 { 00:07:04.684 "subsystem": "bdev", 00:07:04.684 "config": [ 00:07:04.684 { 00:07:04.684 "params": { 00:07:04.684 "trtype": "pcie", 00:07:04.684 "traddr": "0000:00:10.0", 00:07:04.684 "name": "Nvme0" 00:07:04.684 }, 00:07:04.684 "method": "bdev_nvme_attach_controller" 00:07:04.684 }, 00:07:04.684 { 00:07:04.684 "method": "bdev_wait_for_examine" 00:07:04.684 } 00:07:04.684 ] 00:07:04.684 } 00:07:04.684 ] 00:07:04.684 } 00:07:04.684 [2024-11-22 14:45:19.248737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.684 [2024-11-22 14:45:19.324707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.943 [2024-11-22 14:45:19.414263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.943  [2024-11-22T14:45:19.866Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:05.201 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.201 14:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.460 { 00:07:05.460 "subsystems": [ 00:07:05.460 { 00:07:05.460 "subsystem": "bdev", 00:07:05.460 "config": [ 00:07:05.460 { 00:07:05.460 "params": { 00:07:05.460 "trtype": "pcie", 00:07:05.460 "traddr": "0000:00:10.0", 00:07:05.460 "name": "Nvme0" 00:07:05.460 }, 00:07:05.460 "method": "bdev_nvme_attach_controller" 00:07:05.460 }, 00:07:05.460 { 00:07:05.460 "method": "bdev_wait_for_examine" 00:07:05.460 } 00:07:05.460 ] 00:07:05.460 } 00:07:05.460 ] 00:07:05.460 } 00:07:05.460 [2024-11-22 14:45:19.899304] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:05.460 [2024-11-22 14:45:19.899470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59865 ] 00:07:05.460 [2024-11-22 14:45:20.053302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.719 [2024-11-22 14:45:20.147156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.719 [2024-11-22 14:45:20.226125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.719  [2024-11-22T14:45:20.644Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.979 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:05.979 14:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.918 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:06.919 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.919 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.919 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.919 { 00:07:06.919 "subsystems": [ 00:07:06.919 { 00:07:06.919 "subsystem": "bdev", 00:07:06.919 "config": [ 00:07:06.919 { 00:07:06.919 "params": { 00:07:06.919 "trtype": "pcie", 00:07:06.919 "traddr": "0000:00:10.0", 00:07:06.919 "name": "Nvme0" 00:07:06.919 }, 00:07:06.919 "method": "bdev_nvme_attach_controller" 00:07:06.919 }, 00:07:06.919 { 00:07:06.919 "method": "bdev_wait_for_examine" 00:07:06.919 } 00:07:06.919 ] 00:07:06.919 } 00:07:06.919 ] 00:07:06.919 } 00:07:06.919 [2024-11-22 14:45:21.295000] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:06.919 [2024-11-22 14:45:21.295113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:07:06.919 [2024-11-22 14:45:21.441778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.919 [2024-11-22 14:45:21.511096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.179 [2024-11-22 14:45:21.588763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.180  [2024-11-22T14:45:22.103Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:07.438 00:07:07.438 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:07.438 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:07.438 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.438 14:45:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.438 { 00:07:07.438 "subsystems": [ 00:07:07.438 { 00:07:07.438 "subsystem": "bdev", 00:07:07.438 "config": [ 00:07:07.438 { 00:07:07.438 "params": { 00:07:07.438 "trtype": "pcie", 00:07:07.438 "traddr": "0000:00:10.0", 00:07:07.438 "name": "Nvme0" 00:07:07.438 }, 00:07:07.438 "method": "bdev_nvme_attach_controller" 00:07:07.438 }, 00:07:07.438 { 00:07:07.438 "method": "bdev_wait_for_examine" 00:07:07.438 } 00:07:07.438 ] 00:07:07.438 } 00:07:07.438 ] 00:07:07.438 } 00:07:07.438 [2024-11-22 14:45:22.043712] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:07.438 [2024-11-22 14:45:22.043812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59903 ] 00:07:07.697 [2024-11-22 14:45:22.192544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.697 [2024-11-22 14:45:22.263952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.697 [2024-11-22 14:45:22.335721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.956  [2024-11-22T14:45:22.880Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:08.215 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.215 14:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.215 [2024-11-22 14:45:22.781596] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:08.215 [2024-11-22 14:45:22.782167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:07:08.215 { 00:07:08.215 "subsystems": [ 00:07:08.215 { 00:07:08.215 "subsystem": "bdev", 00:07:08.215 "config": [ 00:07:08.215 { 00:07:08.215 "params": { 00:07:08.215 "trtype": "pcie", 00:07:08.215 "traddr": "0000:00:10.0", 00:07:08.215 "name": "Nvme0" 00:07:08.215 }, 00:07:08.215 "method": "bdev_nvme_attach_controller" 00:07:08.215 }, 00:07:08.215 { 00:07:08.215 "method": "bdev_wait_for_examine" 00:07:08.215 } 00:07:08.215 ] 00:07:08.215 } 00:07:08.215 ] 00:07:08.215 } 00:07:08.474 [2024-11-22 14:45:22.927281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.474 [2024-11-22 14:45:23.001565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.474 [2024-11-22 14:45:23.073868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.733  [2024-11-22T14:45:23.656Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:08.991 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:08.991 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:08.992 14:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.559 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:09.559 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:09.559 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.559 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.559 [2024-11-22 14:45:24.113513] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:09.559 [2024-11-22 14:45:24.114063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:07:09.559 { 00:07:09.559 "subsystems": [ 00:07:09.559 { 00:07:09.559 "subsystem": "bdev", 00:07:09.559 "config": [ 00:07:09.559 { 00:07:09.559 "params": { 00:07:09.559 "trtype": "pcie", 00:07:09.559 "traddr": "0000:00:10.0", 00:07:09.559 "name": "Nvme0" 00:07:09.559 }, 00:07:09.559 "method": "bdev_nvme_attach_controller" 00:07:09.559 }, 00:07:09.559 { 00:07:09.559 "method": "bdev_wait_for_examine" 00:07:09.559 } 00:07:09.559 ] 00:07:09.559 } 00:07:09.559 ] 00:07:09.559 } 00:07:09.817 [2024-11-22 14:45:24.257254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.817 [2024-11-22 14:45:24.312931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.817 [2024-11-22 14:45:24.385147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.076  [2024-11-22T14:45:24.999Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:10.334 00:07:10.334 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:10.334 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:10.334 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.334 14:45:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.334 { 00:07:10.334 "subsystems": [ 00:07:10.334 { 00:07:10.334 "subsystem": "bdev", 00:07:10.334 "config": [ 00:07:10.334 { 00:07:10.334 "params": { 00:07:10.334 "trtype": "pcie", 00:07:10.334 "traddr": "0000:00:10.0", 00:07:10.334 "name": "Nvme0" 00:07:10.334 }, 00:07:10.334 "method": "bdev_nvme_attach_controller" 00:07:10.334 }, 00:07:10.334 { 00:07:10.334 "method": "bdev_wait_for_examine" 00:07:10.334 } 00:07:10.334 ] 00:07:10.334 } 00:07:10.334 ] 00:07:10.334 } 00:07:10.334 [2024-11-22 14:45:24.837779] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:10.334 [2024-11-22 14:45:24.837937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:07:10.334 [2024-11-22 14:45:24.992476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.592 [2024-11-22 14:45:25.054079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.592 [2024-11-22 14:45:25.125729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.592  [2024-11-22T14:45:25.516Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:10.851 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.851 14:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.108 [2024-11-22 14:45:25.553339] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:11.108 [2024-11-22 14:45:25.553447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:07:11.108 { 00:07:11.108 "subsystems": [ 00:07:11.108 { 00:07:11.108 "subsystem": "bdev", 00:07:11.108 "config": [ 00:07:11.108 { 00:07:11.108 "params": { 00:07:11.108 "trtype": "pcie", 00:07:11.108 "traddr": "0000:00:10.0", 00:07:11.108 "name": "Nvme0" 00:07:11.108 }, 00:07:11.108 "method": "bdev_nvme_attach_controller" 00:07:11.108 }, 00:07:11.108 { 00:07:11.108 "method": "bdev_wait_for_examine" 00:07:11.108 } 00:07:11.108 ] 00:07:11.108 } 00:07:11.108 ] 00:07:11.108 } 00:07:11.108 [2024-11-22 14:45:25.690476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.108 [2024-11-22 14:45:25.733945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.366 [2024-11-22 14:45:25.804125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.366  [2024-11-22T14:45:26.289Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.624 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.624 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.191 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:12.191 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:12.191 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.191 14:45:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.191 [2024-11-22 14:45:26.746207] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:12.191 { 00:07:12.191 "subsystems": [ 00:07:12.191 { 00:07:12.191 "subsystem": "bdev", 00:07:12.191 "config": [ 00:07:12.191 { 00:07:12.191 "params": { 00:07:12.191 "trtype": "pcie", 00:07:12.191 "traddr": "0000:00:10.0", 00:07:12.191 "name": "Nvme0" 00:07:12.191 }, 00:07:12.191 "method": "bdev_nvme_attach_controller" 00:07:12.191 }, 00:07:12.191 { 00:07:12.191 "method": "bdev_wait_for_examine" 00:07:12.191 } 00:07:12.191 ] 00:07:12.191 } 00:07:12.191 ] 00:07:12.191 } 00:07:12.191 [2024-11-22 14:45:26.746979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60002 ] 00:07:12.450 [2024-11-22 14:45:26.893200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.450 [2024-11-22 14:45:26.946142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.450 [2024-11-22 14:45:27.016339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.708  [2024-11-22T14:45:27.631Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:12.966 00:07:12.966 14:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:12.966 14:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:12.966 14:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.966 14:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.966 { 00:07:12.966 "subsystems": [ 00:07:12.966 { 00:07:12.966 "subsystem": "bdev", 00:07:12.966 "config": [ 00:07:12.966 { 00:07:12.966 "params": { 00:07:12.966 "trtype": "pcie", 00:07:12.966 "traddr": "0000:00:10.0", 00:07:12.966 "name": "Nvme0" 00:07:12.966 }, 00:07:12.966 "method": "bdev_nvme_attach_controller" 00:07:12.966 }, 00:07:12.966 { 00:07:12.966 "method": "bdev_wait_for_examine" 00:07:12.966 } 00:07:12.966 ] 00:07:12.966 } 00:07:12.966 ] 00:07:12.966 } 00:07:12.966 [2024-11-22 14:45:27.450662] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:12.966 [2024-11-22 14:45:27.450794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:07:12.966 [2024-11-22 14:45:27.592595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.224 [2024-11-22 14:45:27.639688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.224 [2024-11-22 14:45:27.711626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.224  [2024-11-22T14:45:28.147Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:13.482 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:13.482 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.483 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.483 { 00:07:13.483 "subsystems": [ 00:07:13.483 { 00:07:13.483 "subsystem": "bdev", 00:07:13.483 "config": [ 00:07:13.483 { 00:07:13.483 "params": { 00:07:13.483 "trtype": "pcie", 00:07:13.483 "traddr": "0000:00:10.0", 00:07:13.483 "name": "Nvme0" 00:07:13.483 }, 00:07:13.483 "method": "bdev_nvme_attach_controller" 00:07:13.483 }, 00:07:13.483 { 00:07:13.483 "method": "bdev_wait_for_examine" 00:07:13.483 } 00:07:13.483 ] 00:07:13.483 } 00:07:13.483 ] 00:07:13.483 } 00:07:13.483 [2024-11-22 14:45:28.133122] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:13.483 [2024-11-22 14:45:28.133241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60031 ] 00:07:13.741 [2024-11-22 14:45:28.278477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.741 [2024-11-22 14:45:28.357509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.000 [2024-11-22 14:45:28.437160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.000  [2024-11-22T14:45:28.927Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:14.262 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:14.262 14:45:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.839 14:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:14.839 14:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:14.839 14:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.839 14:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.839 { 00:07:14.839 "subsystems": [ 00:07:14.839 { 00:07:14.839 "subsystem": "bdev", 00:07:14.839 "config": [ 00:07:14.839 { 00:07:14.839 "params": { 00:07:14.839 "trtype": "pcie", 00:07:14.839 "traddr": "0000:00:10.0", 00:07:14.839 "name": "Nvme0" 00:07:14.839 }, 00:07:14.839 "method": "bdev_nvme_attach_controller" 00:07:14.839 }, 00:07:14.839 { 00:07:14.839 "method": "bdev_wait_for_examine" 00:07:14.839 } 00:07:14.839 ] 00:07:14.839 } 00:07:14.839 ] 00:07:14.839 } 00:07:14.839 [2024-11-22 14:45:29.403304] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:14.839 [2024-11-22 14:45:29.403426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:07:15.097 [2024-11-22 14:45:29.553617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.097 [2024-11-22 14:45:29.634930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.097 [2024-11-22 14:45:29.720176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.355  [2024-11-22T14:45:30.279Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:15.614 00:07:15.614 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:15.614 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:15.614 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.614 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.614 [2024-11-22 14:45:30.200310] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:15.614 [2024-11-22 14:45:30.200698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60069 ] 00:07:15.614 { 00:07:15.614 "subsystems": [ 00:07:15.614 { 00:07:15.614 "subsystem": "bdev", 00:07:15.614 "config": [ 00:07:15.614 { 00:07:15.614 "params": { 00:07:15.614 "trtype": "pcie", 00:07:15.614 "traddr": "0000:00:10.0", 00:07:15.614 "name": "Nvme0" 00:07:15.614 }, 00:07:15.614 "method": "bdev_nvme_attach_controller" 00:07:15.614 }, 00:07:15.614 { 00:07:15.614 "method": "bdev_wait_for_examine" 00:07:15.614 } 00:07:15.614 ] 00:07:15.614 } 00:07:15.614 ] 00:07:15.614 } 00:07:15.872 [2024-11-22 14:45:30.348882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.872 [2024-11-22 14:45:30.439187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.872 [2024-11-22 14:45:30.523009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.130  [2024-11-22T14:45:31.055Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:16.390 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
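The clear_nvme Nvme0n1 '' 49152 call traced just above is that scrub step: per the trace it reduces to a single 1 MiB zero-fill of the bdev (bs=1048576, count=1), which covers the 49152 bytes written in this pass so stale data cannot satisfy the next diff. Stripped to what the trace shows, with conf.json again standing in for the JSON the harness feeds through /dev/fd/62 and the unused nvme_ref argument ignored:

clear_nvme() {                               # simplified sketch of the helper seen in the trace
  local bdev=$1 size=$3
  local bs=1048576 count=1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/zero --ob="$bdev" --bs="$bs" --count="$count" --json conf.json
}
clear_nvme Nvme0n1 '' 49152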
00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.390 14:45:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.390 { 00:07:16.390 "subsystems": [ 00:07:16.390 { 00:07:16.390 "subsystem": "bdev", 00:07:16.390 "config": [ 00:07:16.390 { 00:07:16.390 "params": { 00:07:16.390 "trtype": "pcie", 00:07:16.390 "traddr": "0000:00:10.0", 00:07:16.390 "name": "Nvme0" 00:07:16.390 }, 00:07:16.390 "method": "bdev_nvme_attach_controller" 00:07:16.390 }, 00:07:16.390 { 00:07:16.390 "method": "bdev_wait_for_examine" 00:07:16.390 } 00:07:16.390 ] 00:07:16.390 } 00:07:16.390 ] 00:07:16.390 } 00:07:16.390 [2024-11-22 14:45:31.007847] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:16.390 [2024-11-22 14:45:31.008013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60092 ] 00:07:16.649 [2024-11-22 14:45:31.159651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.649 [2024-11-22 14:45:31.248665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.908 [2024-11-22 14:45:31.336753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.908  [2024-11-22T14:45:31.833Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:17.168 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:17.168 14:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.106 14:45:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:18.106 14:45:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:18.106 14:45:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.106 14:45:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.106 { 00:07:18.106 "subsystems": [ 00:07:18.106 { 00:07:18.106 "subsystem": "bdev", 00:07:18.106 "config": [ 00:07:18.106 { 00:07:18.106 "params": { 00:07:18.106 "trtype": "pcie", 00:07:18.106 "traddr": "0000:00:10.0", 00:07:18.106 "name": "Nvme0" 00:07:18.106 }, 00:07:18.106 "method": "bdev_nvme_attach_controller" 00:07:18.106 }, 00:07:18.106 { 00:07:18.106 "method": "bdev_wait_for_examine" 00:07:18.106 } 00:07:18.106 ] 00:07:18.106 } 00:07:18.106 ] 00:07:18.106 } 00:07:18.106 [2024-11-22 14:45:32.482434] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:18.106 [2024-11-22 14:45:32.482554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:07:18.106 [2024-11-22 14:45:32.634532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.106 [2024-11-22 14:45:32.724277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.366 [2024-11-22 14:45:32.810168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.366  [2024-11-22T14:45:33.290Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:18.625 00:07:18.625 14:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:18.625 14:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:18.625 14:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.625 14:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.625 { 00:07:18.625 "subsystems": [ 00:07:18.625 { 00:07:18.625 "subsystem": "bdev", 00:07:18.625 "config": [ 00:07:18.625 { 00:07:18.625 "params": { 00:07:18.625 "trtype": "pcie", 00:07:18.625 "traddr": "0000:00:10.0", 00:07:18.625 "name": "Nvme0" 00:07:18.625 }, 00:07:18.625 "method": "bdev_nvme_attach_controller" 00:07:18.625 }, 00:07:18.625 { 00:07:18.625 "method": "bdev_wait_for_examine" 00:07:18.625 } 00:07:18.625 ] 00:07:18.625 } 00:07:18.625 ] 00:07:18.625 } 00:07:18.883 [2024-11-22 14:45:33.292324] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:18.884 [2024-11-22 14:45:33.292591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60130 ] 00:07:18.884 [2024-11-22 14:45:33.441752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.884 [2024-11-22 14:45:33.523157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.142 [2024-11-22 14:45:33.608089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.142  [2024-11-22T14:45:34.065Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:19.400 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:19.400 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:19.401 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.401 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:19.401 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.401 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.659 [2024-11-22 14:45:34.094345] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:19.659 [2024-11-22 14:45:34.094447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:07:19.659 { 00:07:19.659 "subsystems": [ 00:07:19.659 { 00:07:19.659 "subsystem": "bdev", 00:07:19.659 "config": [ 00:07:19.659 { 00:07:19.659 "params": { 00:07:19.659 "trtype": "pcie", 00:07:19.659 "traddr": "0000:00:10.0", 00:07:19.659 "name": "Nvme0" 00:07:19.659 }, 00:07:19.659 "method": "bdev_nvme_attach_controller" 00:07:19.659 }, 00:07:19.659 { 00:07:19.659 "method": "bdev_wait_for_examine" 00:07:19.659 } 00:07:19.659 ] 00:07:19.659 } 00:07:19.659 ] 00:07:19.659 } 00:07:19.659 [2024-11-22 14:45:34.242511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.659 [2024-11-22 14:45:34.314593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.918 [2024-11-22 14:45:34.395348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.918  [2024-11-22T14:45:34.842Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.177 00:07:20.437 00:07:20.437 real 0m17.341s 00:07:20.437 user 0m12.576s 00:07:20.437 sys 0m7.358s 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 ************************************ 00:07:20.437 END TEST dd_rw 00:07:20.437 ************************************ 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 ************************************ 00:07:20.437 START TEST dd_rw_offset 00:07:20.437 ************************************ 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=tpqji1997lfht6pmn7amcv4e1rgm2m0i9ebs25jgr2sn5ahihhn01wjrfqvygfs1txnofm0npquo74ybc8a6mgd2fbg15tw0ls8abrh484qemyw3sxgm23q569j2y0u6uvlvp3z0rah6bhy7eynht8vldy8k3gpn2c9tlo0ai92am1shv2dk71hqechdvo5qah4nh0e3izo4kpmm776p91x9bcqwf3nffle8xc37a66tqipyuo970ydb33bl3t1sykwt75axsr5wdfafvl5196fzrxy2lc0m6snw656f2122a2935oz4mq6dummms6up8npht3m8p1qb4i90lkap445icl93fxdvz6vz48dfwlzu7ixsfiq13axap12zsjyr04xib3j9tf1f51o5n3dj2691jgw5979svnqdemlwhzvmi4wv9tyrfuw0dgtodwqzw1e97fvajcx4wuoixcnz7wstigw72yia22mxzcjccyw2xa3j2u6kghgaunhtk0x3yfuiegd3camahd4uzbd75njigsfehj4fkkc8t3rhxpay9xy3if0vwg98kpsxho9ijyao4aalddx7docj4r87yuw3ga7qzzcvmcplp2jmm3jtwheql1yinyowm1vlali60327x83ez7g7ooto4jtljfm1kch03haiqrvuiyfbpqbs6pz8deex1iyd7rtwu699go1cps1ehuaizkwq7u69e41wpxqmmno8oavwpzu89gzczwacpk376qj3a0fi1n7jlv1sgq78lt59ehb401fx2rloyktvzzyg3nfc2fm0pzmbkv3wzl1k5a7zdcn7r8v4gb29w570u6r9b0q4s6cngtcjsxqvxz4idmr3h7r51udzdq9zr9q6otd4le7ecvxjg7y0l41gsuw8b46nnf7y2nebcp8gfadwkw7sogim3fyku0e2arapzgj4vsng2wjqp3qi9qit3dror6j4te42k459kzhqimxsapca9d9oefwn19rpdbu6bvhqlnnazlfjvesmohfv78av1bapj1111rk1zwhx4yor8gpdj3s6aqkuwnofqs8uuxuyhzejs2602fslr4e178roumgua8rz1t9hkqrnp75c6awlarjx770nlpbd67ncz3q2k0nl4t4d6l347dsyqbuvlkdes4azhbdgazq9918qgza4osi61mpfo5k2f7eaatr9rnokbs5et7cvf4mlh34uag75alm99s0665y032ul1f3jizjdl5sydvdmmxgt2tmc4oaajxmzzijc6ilesgguwp1k084wmse8xmh0zushlc4xzdekdjm1j9ofp2wdhfyzd8xr658b6ef2puce15d5hs4i44l3t57yjyefd89nv412d75bkjwuyy2pjs0dfqhb5bdjpheo4q3b55u3xjrdzb6meqaf9nuw06dd3ir585r71douk89lr1qzgpz66iq3cmdw8fc3qa7hd94525tt4ff6qo6j5htpno19gjl0m8lsa4h3cms5ou275k6cr8azhvnbeg1xcxb4miu4s97ye3l0e4sd5ucrq96e5rlgjn5eqw459mvrh8t72q6odo1q97i0n95ev2utmsyzbp3mmmh0fx02tchzgqot9uxt4aq0i20ps45tov3qmshy1c9kelz2u0mig8oz4x9zlwkllm4k83uarpn3e748ck6l3omgnhvp9zww9ajkv1cuvmh8ij4at3lvmiq46xb5xnzn1ku11hbps9xat4tbe1s7mwfm5oi8we7dv77p5vija02x42oj6unpc2cu1856qs6c2wy09y5d7ru0hygnuibx90vs7e62n6i07iu2h5v5s5z40fdn9y2zj1eh8eotxam8yd3e0o40m6hqguwegzopmuu84fcb6u64wad1q5m93scdhi7or3zxz2exsmcg37jy8tv5ip4vzii6mwxci6ljipgcbcdfi5omfuolyal1vzrkd75ebqswdtejwj8otlty8qbiivn1lhuboqa6zn8oyck72qdepcylqkh2qdluhxn9jc24ajsxnjancvzhhj0ki7xqedbpdjsemlhirp6yjzqqssfdg36lfhzaklksjh5vtvrsvj4eg8u32z0t6uf5s6bn8fw3i4w9zkc58v5ke88oxz65sl1f8pj25whgrl58p2nmdzr641bskg0uhvu3058e6vbxm864bdi1d6o5o3x75lk9kqt9jjshyy01ud7yng287e5smla98f43j1wp1iaeg09vzx7i9cvns9prthiifxe6bl64fiv18m08h72m4dqachrjmug2jtv0g2y3n9d7e75njgo29ng5m5mdptsaa0s4oi6zzl8srf8y74mjrxhyetckqq2b3gxkrmapn6yolhag84i0h9dhis2d8ee0ow0oegmtj9mxalvs1j1cnw7k0julxqwguwrw1z0vmro2w4efzjqbcolco9379on0zu575xcqebmbc6qhk9k42rc3agwlfigf1ow4vh9dybiivdyg4efymixztfis63m9ah3xic7j215k9m0zl8supz3wcikzq3c6ce9gnzdss1qxfpk4ol069bfgo2m7f2eia7fv39zji3i67xefrlhrav1s813kidmyx2gd49kgtxm46qm0ydkjj3nesnlgg3j6duns8joozmreyoedon3bbjmnynm4fw7t12e7howluebjijl9rqrluiu5trbmzzsbyyj9bdgorc5hhu56p7ffi9orl7ndtbawb786cyx7ja7pkmvwuen1gnpa94ty938zfbn4da1thuxq0esofc2ib82zcnozo29oapr4vs86r24unkxevv4agabzikz2yxdxb927zvv5ruoeyc9x79e7td8az7supmv481pbamb5xmhu0aeaghsrl3mb8zkufkkzb12sj2xp4tq6a8hsa05lyfti09xjuy5qbphnwtxnj3bqbtt2kgg264a8sd6cxnflsiya1u1dh5z2ce2wf925rg3awiz762caf40y08kr0otjngwxgq3h638r0pxn7mb2zu4ktobp68m9p6nm1gs7yco1h4sy0s28guk3vy9brpmicb97p99z8ajqxp1qp96hfymzut39h11n0k190j22044jbtv0e3equ49ms7cei55e2134g4jmrqldhw7n88733iyy8wm7shfareryqc3fucrje45lmlrnb5qhfhhcn6cnnlpiii9cy28ddbeay5hmjwjknr02mngp2001ebiod6vaay2cerv0z3i3mni5uj694pfgagdt8dymsm71tnvvpdky4xiklbgh5372716t18zq0q3ngffl4x6dsa6zhr3t88d91adb8o1zc7312fzphe3i048uafxnr757d55tt5uc4fief672lljln8hq8d5vmvte60vtv8j8oidmqxayoyp9772cl3g69scvvnzu2x2hwkihwe4n6sfr6afgtuuj3dezkfjns83l1u9wj7of6fszh9ge3e05s08ed4whe2hrz1qlyy2qj4l8jtn07aher3sp9lpvmz15ih
63zl25jginfkuc2awakvu11xwtjnd7f9kgef844j6d4fm5rhbw8xzn9cg7qqywhg1jk0hup0yccz3cmmygbynn62xshkocbk9xwnowpvmby1rs2f5nzq1xr33k1k5ey3lcdakb5xaelr67vdeydir7eksth8gbfuoq3hwgopxli5ti4di80picyxvf3sct2wlsrvwd3w3k6cx100qjupquq6wbd477bxk9xl9oaqci4fg8ex0a1apd8v89pu304hrcv2iuj0t7jdsq8yq54j36l87vwb3dw4npbyuhv0tpg1xq74p7439j35bwd50ovddjb56ukt97tuy7b0qoxg5ogl35bscpql8kbot1ph5eru11wbe0u5wgymka7dgqnp8z6i777unnprpbizovq6c5cdcw8ugw1krsh8hh6w3ge6y1pws49m2h7412ycki4gftony66rw9jmb4aj5yc6orzny14eqowedgfc8f9png49z4xsphpi7hwuzdn6kqmvc5v42u5oo3kqu0zdsga6qjbfzdjja9tzns 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:20.437 14:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 { 00:07:20.437 "subsystems": [ 00:07:20.437 { 00:07:20.437 "subsystem": "bdev", 00:07:20.437 "config": [ 00:07:20.437 { 00:07:20.437 "params": { 00:07:20.437 "trtype": "pcie", 00:07:20.437 "traddr": "0000:00:10.0", 00:07:20.437 "name": "Nvme0" 00:07:20.437 }, 00:07:20.437 "method": "bdev_nvme_attach_controller" 00:07:20.437 }, 00:07:20.437 { 00:07:20.437 "method": "bdev_wait_for_examine" 00:07:20.437 } 00:07:20.437 ] 00:07:20.437 } 00:07:20.437 ] 00:07:20.437 } 00:07:20.437 [2024-11-22 14:45:35.006480] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:20.437 [2024-11-22 14:45:35.006606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:07:20.697 [2024-11-22 14:45:35.156108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.697 [2024-11-22 14:45:35.241222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.697 [2024-11-22 14:45:35.320036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.956  [2024-11-22T14:45:35.880Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:21.215 00:07:21.215 14:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:21.215 14:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:21.215 14:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:21.215 14:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 { 00:07:21.215 "subsystems": [ 00:07:21.215 { 00:07:21.215 "subsystem": "bdev", 00:07:21.215 "config": [ 00:07:21.215 { 00:07:21.215 "params": { 00:07:21.215 "trtype": "pcie", 00:07:21.215 "traddr": "0000:00:10.0", 00:07:21.215 "name": "Nvme0" 00:07:21.215 }, 00:07:21.215 "method": "bdev_nvme_attach_controller" 00:07:21.215 }, 00:07:21.215 { 00:07:21.215 "method": "bdev_wait_for_examine" 00:07:21.215 } 00:07:21.215 ] 00:07:21.215 } 00:07:21.215 ] 00:07:21.215 } 00:07:21.215 [2024-11-22 14:45:35.797987] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:21.215 [2024-11-22 14:45:35.798133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60206 ] 00:07:21.474 [2024-11-22 14:45:35.946514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.474 [2024-11-22 14:45:36.025880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.474 [2024-11-22 14:45:36.106233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.733  [2024-11-22T14:45:36.658Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:21.993 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ tpqji1997lfht6pmn7amcv4e1rgm2m0i9ebs25jgr2sn5ahihhn01wjrfqvygfs1txnofm0npquo74ybc8a6mgd2fbg15tw0ls8abrh484qemyw3sxgm23q569j2y0u6uvlvp3z0rah6bhy7eynht8vldy8k3gpn2c9tlo0ai92am1shv2dk71hqechdvo5qah4nh0e3izo4kpmm776p91x9bcqwf3nffle8xc37a66tqipyuo970ydb33bl3t1sykwt75axsr5wdfafvl5196fzrxy2lc0m6snw656f2122a2935oz4mq6dummms6up8npht3m8p1qb4i90lkap445icl93fxdvz6vz48dfwlzu7ixsfiq13axap12zsjyr04xib3j9tf1f51o5n3dj2691jgw5979svnqdemlwhzvmi4wv9tyrfuw0dgtodwqzw1e97fvajcx4wuoixcnz7wstigw72yia22mxzcjccyw2xa3j2u6kghgaunhtk0x3yfuiegd3camahd4uzbd75njigsfehj4fkkc8t3rhxpay9xy3if0vwg98kpsxho9ijyao4aalddx7docj4r87yuw3ga7qzzcvmcplp2jmm3jtwheql1yinyowm1vlali60327x83ez7g7ooto4jtljfm1kch03haiqrvuiyfbpqbs6pz8deex1iyd7rtwu699go1cps1ehuaizkwq7u69e41wpxqmmno8oavwpzu89gzczwacpk376qj3a0fi1n7jlv1sgq78lt59ehb401fx2rloyktvzzyg3nfc2fm0pzmbkv3wzl1k5a7zdcn7r8v4gb29w570u6r9b0q4s6cngtcjsxqvxz4idmr3h7r51udzdq9zr9q6otd4le7ecvxjg7y0l41gsuw8b46nnf7y2nebcp8gfadwkw7sogim3fyku0e2arapzgj4vsng2wjqp3qi9qit3dror6j4te42k459kzhqimxsapca9d9oefwn19rpdbu6bvhqlnnazlfjvesmohfv78av1bapj1111rk1zwhx4yor8gpdj3s6aqkuwnofqs8uuxuyhzejs2602fslr4e178roumgua8rz1t9hkqrnp75c6awlarjx770nlpbd67ncz3q2k0nl4t4d6l347dsyqbuvlkdes4azhbdgazq9918qgza4osi61mpfo5k2f7eaatr9rnokbs5et7cvf4mlh34uag75alm99s0665y032ul1f3jizjdl5sydvdmmxgt2tmc4oaajxmzzijc6ilesgguwp1k084wmse8xmh0zushlc4xzdekdjm1j9ofp2wdhfyzd8xr658b6ef2puce15d5hs4i44l3t57yjyefd89nv412d75bkjwuyy2pjs0dfqhb5bdjpheo4q3b55u3xjrdzb6meqaf9nuw06dd3ir585r71douk89lr1qzgpz66iq3cmdw8fc3qa7hd94525tt4ff6qo6j5htpno19gjl0m8lsa4h3cms5ou275k6cr8azhvnbeg1xcxb4miu4s97ye3l0e4sd5ucrq96e5rlgjn5eqw459mvrh8t72q6odo1q97i0n95ev2utmsyzbp3mmmh0fx02tchzgqot9uxt4aq0i20ps45tov3qmshy1c9kelz2u0mig8oz4x9zlwkllm4k83uarpn3e748ck6l3omgnhvp9zww9ajkv1cuvmh8ij4at3lvmiq46xb5xnzn1ku11hbps9xat4tbe1s7mwfm5oi8we7dv77p5vija02x42oj6unpc2cu1856qs6c2wy09y5d7ru0hygnuibx90vs7e62n6i07iu2h5v5s5z40fdn9y2zj1eh8eotxam8yd3e0o40m6hqguwegzopmuu84fcb6u64wad1q5m93scdhi7or3zxz2exsmcg37jy8tv5ip4vzii6mwxci6ljipgcbcdfi5omfuolyal1vzrkd75ebqswdtejwj8otlty8qbiivn1lhuboqa6zn8oyck72qdepcylqkh2qdluhxn9jc24ajsxnjancvzhhj0ki7xqedbpdjsemlhirp6yjzqqssfdg36lfhzaklksjh5vtvrsvj4eg8u32z0t6uf5s6bn8fw3i4w9zkc58v5ke88oxz65sl1f8pj25whgrl58p2nmdzr641bskg0uhvu3058e6vbxm864bdi1d6o5o3x75lk9kqt9jjshyy01ud7yng287e5smla98f43j1wp1iaeg09vzx7i9cvns9prthiifxe6bl64fiv18m08h72m4dqachrjmug2jtv0g2y3n9d7e75njgo29ng5m5mdptsaa0s4oi6zzl8srf8y74mjrxhyetckqq2b3gxkrmapn6yolhag84i0h9dhis2d8ee0ow0oegmtj9mxalvs1j1cnw7k0julxqwguwrw1z0vmro2w4efzjqbcolco9379on0zu575xcqebmbc6qhk9k42rc3agwlfigf1ow4vh9dybiivdyg4efymixztfis63m9ah3xic7j215k9m0zl8supz3wcikzq3c6ce9gnzdss1qxfpk4ol069bfgo2m7f2eia7fv39zji3i6
7xefrlhrav1s813kidmyx2gd49kgtxm46qm0ydkjj3nesnlgg3j6duns8joozmreyoedon3bbjmnynm4fw7t12e7howluebjijl9rqrluiu5trbmzzsbyyj9bdgorc5hhu56p7ffi9orl7ndtbawb786cyx7ja7pkmvwuen1gnpa94ty938zfbn4da1thuxq0esofc2ib82zcnozo29oapr4vs86r24unkxevv4agabzikz2yxdxb927zvv5ruoeyc9x79e7td8az7supmv481pbamb5xmhu0aeaghsrl3mb8zkufkkzb12sj2xp4tq6a8hsa05lyfti09xjuy5qbphnwtxnj3bqbtt2kgg264a8sd6cxnflsiya1u1dh5z2ce2wf925rg3awiz762caf40y08kr0otjngwxgq3h638r0pxn7mb2zu4ktobp68m9p6nm1gs7yco1h4sy0s28guk3vy9brpmicb97p99z8ajqxp1qp96hfymzut39h11n0k190j22044jbtv0e3equ49ms7cei55e2134g4jmrqldhw7n88733iyy8wm7shfareryqc3fucrje45lmlrnb5qhfhhcn6cnnlpiii9cy28ddbeay5hmjwjknr02mngp2001ebiod6vaay2cerv0z3i3mni5uj694pfgagdt8dymsm71tnvvpdky4xiklbgh5372716t18zq0q3ngffl4x6dsa6zhr3t88d91adb8o1zc7312fzphe3i048uafxnr757d55tt5uc4fief672lljln8hq8d5vmvte60vtv8j8oidmqxayoyp9772cl3g69scvvnzu2x2hwkihwe4n6sfr6afgtuuj3dezkfjns83l1u9wj7of6fszh9ge3e05s08ed4whe2hrz1qlyy2qj4l8jtn07aher3sp9lpvmz15ih63zl25jginfkuc2awakvu11xwtjnd7f9kgef844j6d4fm5rhbw8xzn9cg7qqywhg1jk0hup0yccz3cmmygbynn62xshkocbk9xwnowpvmby1rs2f5nzq1xr33k1k5ey3lcdakb5xaelr67vdeydir7eksth8gbfuoq3hwgopxli5ti4di80picyxvf3sct2wlsrvwd3w3k6cx100qjupquq6wbd477bxk9xl9oaqci4fg8ex0a1apd8v89pu304hrcv2iuj0t7jdsq8yq54j36l87vwb3dw4npbyuhv0tpg1xq74p7439j35bwd50ovddjb56ukt97tuy7b0qoxg5ogl35bscpql8kbot1ph5eru11wbe0u5wgymka7dgqnp8z6i777unnprpbizovq6c5cdcw8ugw1krsh8hh6w3ge6y1pws49m2h7412ycki4gftony66rw9jmb4aj5yc6orzny14eqowedgfc8f9png49z4xsphpi7hwuzdn6kqmvc5v42u5oo3kqu0zdsga6qjbfzdjja9tzns == \t\p\q\j\i\1\9\9\7\l\f\h\t\6\p\m\n\7\a\m\c\v\4\e\1\r\g\m\2\m\0\i\9\e\b\s\2\5\j\g\r\2\s\n\5\a\h\i\h\h\n\0\1\w\j\r\f\q\v\y\g\f\s\1\t\x\n\o\f\m\0\n\p\q\u\o\7\4\y\b\c\8\a\6\m\g\d\2\f\b\g\1\5\t\w\0\l\s\8\a\b\r\h\4\8\4\q\e\m\y\w\3\s\x\g\m\2\3\q\5\6\9\j\2\y\0\u\6\u\v\l\v\p\3\z\0\r\a\h\6\b\h\y\7\e\y\n\h\t\8\v\l\d\y\8\k\3\g\p\n\2\c\9\t\l\o\0\a\i\9\2\a\m\1\s\h\v\2\d\k\7\1\h\q\e\c\h\d\v\o\5\q\a\h\4\n\h\0\e\3\i\z\o\4\k\p\m\m\7\7\6\p\9\1\x\9\b\c\q\w\f\3\n\f\f\l\e\8\x\c\3\7\a\6\6\t\q\i\p\y\u\o\9\7\0\y\d\b\3\3\b\l\3\t\1\s\y\k\w\t\7\5\a\x\s\r\5\w\d\f\a\f\v\l\5\1\9\6\f\z\r\x\y\2\l\c\0\m\6\s\n\w\6\5\6\f\2\1\2\2\a\2\9\3\5\o\z\4\m\q\6\d\u\m\m\m\s\6\u\p\8\n\p\h\t\3\m\8\p\1\q\b\4\i\9\0\l\k\a\p\4\4\5\i\c\l\9\3\f\x\d\v\z\6\v\z\4\8\d\f\w\l\z\u\7\i\x\s\f\i\q\1\3\a\x\a\p\1\2\z\s\j\y\r\0\4\x\i\b\3\j\9\t\f\1\f\5\1\o\5\n\3\d\j\2\6\9\1\j\g\w\5\9\7\9\s\v\n\q\d\e\m\l\w\h\z\v\m\i\4\w\v\9\t\y\r\f\u\w\0\d\g\t\o\d\w\q\z\w\1\e\9\7\f\v\a\j\c\x\4\w\u\o\i\x\c\n\z\7\w\s\t\i\g\w\7\2\y\i\a\2\2\m\x\z\c\j\c\c\y\w\2\x\a\3\j\2\u\6\k\g\h\g\a\u\n\h\t\k\0\x\3\y\f\u\i\e\g\d\3\c\a\m\a\h\d\4\u\z\b\d\7\5\n\j\i\g\s\f\e\h\j\4\f\k\k\c\8\t\3\r\h\x\p\a\y\9\x\y\3\i\f\0\v\w\g\9\8\k\p\s\x\h\o\9\i\j\y\a\o\4\a\a\l\d\d\x\7\d\o\c\j\4\r\8\7\y\u\w\3\g\a\7\q\z\z\c\v\m\c\p\l\p\2\j\m\m\3\j\t\w\h\e\q\l\1\y\i\n\y\o\w\m\1\v\l\a\l\i\6\0\3\2\7\x\8\3\e\z\7\g\7\o\o\t\o\4\j\t\l\j\f\m\1\k\c\h\0\3\h\a\i\q\r\v\u\i\y\f\b\p\q\b\s\6\p\z\8\d\e\e\x\1\i\y\d\7\r\t\w\u\6\9\9\g\o\1\c\p\s\1\e\h\u\a\i\z\k\w\q\7\u\6\9\e\4\1\w\p\x\q\m\m\n\o\8\o\a\v\w\p\z\u\8\9\g\z\c\z\w\a\c\p\k\3\7\6\q\j\3\a\0\f\i\1\n\7\j\l\v\1\s\g\q\7\8\l\t\5\9\e\h\b\4\0\1\f\x\2\r\l\o\y\k\t\v\z\z\y\g\3\n\f\c\2\f\m\0\p\z\m\b\k\v\3\w\z\l\1\k\5\a\7\z\d\c\n\7\r\8\v\4\g\b\2\9\w\5\7\0\u\6\r\9\b\0\q\4\s\6\c\n\g\t\c\j\s\x\q\v\x\z\4\i\d\m\r\3\h\7\r\5\1\u\d\z\d\q\9\z\r\9\q\6\o\t\d\4\l\e\7\e\c\v\x\j\g\7\y\0\l\4\1\g\s\u\w\8\b\4\6\n\n\f\7\y\2\n\e\b\c\p\8\g\f\a\d\w\k\w\7\s\o\g\i\m\3\f\y\k\u\0\e\2\a\r\a\p\z\g\j\4\v\s\n\g\2\w\j\q\p\3\q\i\9\q\i\t\3\d\r\o\r\6\j\4\t\e\4\2\k\4\5\9\k\z\h\q\i\m\x\s\a\p\c\a\9\d\9\o\e\f\w\n\1\9\r\p\d\b\u\6\b\v\h\q\l\n\n\a\z\l\f\j\v\e\s\m\o\h\f\
v\7\8\a\v\1\b\a\p\j\1\1\1\1\r\k\1\z\w\h\x\4\y\o\r\8\g\p\d\j\3\s\6\a\q\k\u\w\n\o\f\q\s\8\u\u\x\u\y\h\z\e\j\s\2\6\0\2\f\s\l\r\4\e\1\7\8\r\o\u\m\g\u\a\8\r\z\1\t\9\h\k\q\r\n\p\7\5\c\6\a\w\l\a\r\j\x\7\7\0\n\l\p\b\d\6\7\n\c\z\3\q\2\k\0\n\l\4\t\4\d\6\l\3\4\7\d\s\y\q\b\u\v\l\k\d\e\s\4\a\z\h\b\d\g\a\z\q\9\9\1\8\q\g\z\a\4\o\s\i\6\1\m\p\f\o\5\k\2\f\7\e\a\a\t\r\9\r\n\o\k\b\s\5\e\t\7\c\v\f\4\m\l\h\3\4\u\a\g\7\5\a\l\m\9\9\s\0\6\6\5\y\0\3\2\u\l\1\f\3\j\i\z\j\d\l\5\s\y\d\v\d\m\m\x\g\t\2\t\m\c\4\o\a\a\j\x\m\z\z\i\j\c\6\i\l\e\s\g\g\u\w\p\1\k\0\8\4\w\m\s\e\8\x\m\h\0\z\u\s\h\l\c\4\x\z\d\e\k\d\j\m\1\j\9\o\f\p\2\w\d\h\f\y\z\d\8\x\r\6\5\8\b\6\e\f\2\p\u\c\e\1\5\d\5\h\s\4\i\4\4\l\3\t\5\7\y\j\y\e\f\d\8\9\n\v\4\1\2\d\7\5\b\k\j\w\u\y\y\2\p\j\s\0\d\f\q\h\b\5\b\d\j\p\h\e\o\4\q\3\b\5\5\u\3\x\j\r\d\z\b\6\m\e\q\a\f\9\n\u\w\0\6\d\d\3\i\r\5\8\5\r\7\1\d\o\u\k\8\9\l\r\1\q\z\g\p\z\6\6\i\q\3\c\m\d\w\8\f\c\3\q\a\7\h\d\9\4\5\2\5\t\t\4\f\f\6\q\o\6\j\5\h\t\p\n\o\1\9\g\j\l\0\m\8\l\s\a\4\h\3\c\m\s\5\o\u\2\7\5\k\6\c\r\8\a\z\h\v\n\b\e\g\1\x\c\x\b\4\m\i\u\4\s\9\7\y\e\3\l\0\e\4\s\d\5\u\c\r\q\9\6\e\5\r\l\g\j\n\5\e\q\w\4\5\9\m\v\r\h\8\t\7\2\q\6\o\d\o\1\q\9\7\i\0\n\9\5\e\v\2\u\t\m\s\y\z\b\p\3\m\m\m\h\0\f\x\0\2\t\c\h\z\g\q\o\t\9\u\x\t\4\a\q\0\i\2\0\p\s\4\5\t\o\v\3\q\m\s\h\y\1\c\9\k\e\l\z\2\u\0\m\i\g\8\o\z\4\x\9\z\l\w\k\l\l\m\4\k\8\3\u\a\r\p\n\3\e\7\4\8\c\k\6\l\3\o\m\g\n\h\v\p\9\z\w\w\9\a\j\k\v\1\c\u\v\m\h\8\i\j\4\a\t\3\l\v\m\i\q\4\6\x\b\5\x\n\z\n\1\k\u\1\1\h\b\p\s\9\x\a\t\4\t\b\e\1\s\7\m\w\f\m\5\o\i\8\w\e\7\d\v\7\7\p\5\v\i\j\a\0\2\x\4\2\o\j\6\u\n\p\c\2\c\u\1\8\5\6\q\s\6\c\2\w\y\0\9\y\5\d\7\r\u\0\h\y\g\n\u\i\b\x\9\0\v\s\7\e\6\2\n\6\i\0\7\i\u\2\h\5\v\5\s\5\z\4\0\f\d\n\9\y\2\z\j\1\e\h\8\e\o\t\x\a\m\8\y\d\3\e\0\o\4\0\m\6\h\q\g\u\w\e\g\z\o\p\m\u\u\8\4\f\c\b\6\u\6\4\w\a\d\1\q\5\m\9\3\s\c\d\h\i\7\o\r\3\z\x\z\2\e\x\s\m\c\g\3\7\j\y\8\t\v\5\i\p\4\v\z\i\i\6\m\w\x\c\i\6\l\j\i\p\g\c\b\c\d\f\i\5\o\m\f\u\o\l\y\a\l\1\v\z\r\k\d\7\5\e\b\q\s\w\d\t\e\j\w\j\8\o\t\l\t\y\8\q\b\i\i\v\n\1\l\h\u\b\o\q\a\6\z\n\8\o\y\c\k\7\2\q\d\e\p\c\y\l\q\k\h\2\q\d\l\u\h\x\n\9\j\c\2\4\a\j\s\x\n\j\a\n\c\v\z\h\h\j\0\k\i\7\x\q\e\d\b\p\d\j\s\e\m\l\h\i\r\p\6\y\j\z\q\q\s\s\f\d\g\3\6\l\f\h\z\a\k\l\k\s\j\h\5\v\t\v\r\s\v\j\4\e\g\8\u\3\2\z\0\t\6\u\f\5\s\6\b\n\8\f\w\3\i\4\w\9\z\k\c\5\8\v\5\k\e\8\8\o\x\z\6\5\s\l\1\f\8\p\j\2\5\w\h\g\r\l\5\8\p\2\n\m\d\z\r\6\4\1\b\s\k\g\0\u\h\v\u\3\0\5\8\e\6\v\b\x\m\8\6\4\b\d\i\1\d\6\o\5\o\3\x\7\5\l\k\9\k\q\t\9\j\j\s\h\y\y\0\1\u\d\7\y\n\g\2\8\7\e\5\s\m\l\a\9\8\f\4\3\j\1\w\p\1\i\a\e\g\0\9\v\z\x\7\i\9\c\v\n\s\9\p\r\t\h\i\i\f\x\e\6\b\l\6\4\f\i\v\1\8\m\0\8\h\7\2\m\4\d\q\a\c\h\r\j\m\u\g\2\j\t\v\0\g\2\y\3\n\9\d\7\e\7\5\n\j\g\o\2\9\n\g\5\m\5\m\d\p\t\s\a\a\0\s\4\o\i\6\z\z\l\8\s\r\f\8\y\7\4\m\j\r\x\h\y\e\t\c\k\q\q\2\b\3\g\x\k\r\m\a\p\n\6\y\o\l\h\a\g\8\4\i\0\h\9\d\h\i\s\2\d\8\e\e\0\o\w\0\o\e\g\m\t\j\9\m\x\a\l\v\s\1\j\1\c\n\w\7\k\0\j\u\l\x\q\w\g\u\w\r\w\1\z\0\v\m\r\o\2\w\4\e\f\z\j\q\b\c\o\l\c\o\9\3\7\9\o\n\0\z\u\5\7\5\x\c\q\e\b\m\b\c\6\q\h\k\9\k\4\2\r\c\3\a\g\w\l\f\i\g\f\1\o\w\4\v\h\9\d\y\b\i\i\v\d\y\g\4\e\f\y\m\i\x\z\t\f\i\s\6\3\m\9\a\h\3\x\i\c\7\j\2\1\5\k\9\m\0\z\l\8\s\u\p\z\3\w\c\i\k\z\q\3\c\6\c\e\9\g\n\z\d\s\s\1\q\x\f\p\k\4\o\l\0\6\9\b\f\g\o\2\m\7\f\2\e\i\a\7\f\v\3\9\z\j\i\3\i\6\7\x\e\f\r\l\h\r\a\v\1\s\8\1\3\k\i\d\m\y\x\2\g\d\4\9\k\g\t\x\m\4\6\q\m\0\y\d\k\j\j\3\n\e\s\n\l\g\g\3\j\6\d\u\n\s\8\j\o\o\z\m\r\e\y\o\e\d\o\n\3\b\b\j\m\n\y\n\m\4\f\w\7\t\1\2\e\7\h\o\w\l\u\e\b\j\i\j\l\9\r\q\r\l\u\i\u\5\t\r\b\m\z\z\s\b\y\y\j\9\b\d\g\o\r\c\5\h\h\u\5\6\p\7\f\f\i\9\o\r\l\7\n\d\t\b\a\w\b\7\8\6\c\y\x\7\j\a\7\p\k\m\v\w\u\e\n\1\g\n\p\a\9\4\t\y\9\3\8\z\f\b\n\4\d\a\1\t\h\u\x\q\0\e\s\o\f\c\2\i\b
\8\2\z\c\n\o\z\o\2\9\o\a\p\r\4\v\s\8\6\r\2\4\u\n\k\x\e\v\v\4\a\g\a\b\z\i\k\z\2\y\x\d\x\b\9\2\7\z\v\v\5\r\u\o\e\y\c\9\x\7\9\e\7\t\d\8\a\z\7\s\u\p\m\v\4\8\1\p\b\a\m\b\5\x\m\h\u\0\a\e\a\g\h\s\r\l\3\m\b\8\z\k\u\f\k\k\z\b\1\2\s\j\2\x\p\4\t\q\6\a\8\h\s\a\0\5\l\y\f\t\i\0\9\x\j\u\y\5\q\b\p\h\n\w\t\x\n\j\3\b\q\b\t\t\2\k\g\g\2\6\4\a\8\s\d\6\c\x\n\f\l\s\i\y\a\1\u\1\d\h\5\z\2\c\e\2\w\f\9\2\5\r\g\3\a\w\i\z\7\6\2\c\a\f\4\0\y\0\8\k\r\0\o\t\j\n\g\w\x\g\q\3\h\6\3\8\r\0\p\x\n\7\m\b\2\z\u\4\k\t\o\b\p\6\8\m\9\p\6\n\m\1\g\s\7\y\c\o\1\h\4\s\y\0\s\2\8\g\u\k\3\v\y\9\b\r\p\m\i\c\b\9\7\p\9\9\z\8\a\j\q\x\p\1\q\p\9\6\h\f\y\m\z\u\t\3\9\h\1\1\n\0\k\1\9\0\j\2\2\0\4\4\j\b\t\v\0\e\3\e\q\u\4\9\m\s\7\c\e\i\5\5\e\2\1\3\4\g\4\j\m\r\q\l\d\h\w\7\n\8\8\7\3\3\i\y\y\8\w\m\7\s\h\f\a\r\e\r\y\q\c\3\f\u\c\r\j\e\4\5\l\m\l\r\n\b\5\q\h\f\h\h\c\n\6\c\n\n\l\p\i\i\i\9\c\y\2\8\d\d\b\e\a\y\5\h\m\j\w\j\k\n\r\0\2\m\n\g\p\2\0\0\1\e\b\i\o\d\6\v\a\a\y\2\c\e\r\v\0\z\3\i\3\m\n\i\5\u\j\6\9\4\p\f\g\a\g\d\t\8\d\y\m\s\m\7\1\t\n\v\v\p\d\k\y\4\x\i\k\l\b\g\h\5\3\7\2\7\1\6\t\1\8\z\q\0\q\3\n\g\f\f\l\4\x\6\d\s\a\6\z\h\r\3\t\8\8\d\9\1\a\d\b\8\o\1\z\c\7\3\1\2\f\z\p\h\e\3\i\0\4\8\u\a\f\x\n\r\7\5\7\d\5\5\t\t\5\u\c\4\f\i\e\f\6\7\2\l\l\j\l\n\8\h\q\8\d\5\v\m\v\t\e\6\0\v\t\v\8\j\8\o\i\d\m\q\x\a\y\o\y\p\9\7\7\2\c\l\3\g\6\9\s\c\v\v\n\z\u\2\x\2\h\w\k\i\h\w\e\4\n\6\s\f\r\6\a\f\g\t\u\u\j\3\d\e\z\k\f\j\n\s\8\3\l\1\u\9\w\j\7\o\f\6\f\s\z\h\9\g\e\3\e\0\5\s\0\8\e\d\4\w\h\e\2\h\r\z\1\q\l\y\y\2\q\j\4\l\8\j\t\n\0\7\a\h\e\r\3\s\p\9\l\p\v\m\z\1\5\i\h\6\3\z\l\2\5\j\g\i\n\f\k\u\c\2\a\w\a\k\v\u\1\1\x\w\t\j\n\d\7\f\9\k\g\e\f\8\4\4\j\6\d\4\f\m\5\r\h\b\w\8\x\z\n\9\c\g\7\q\q\y\w\h\g\1\j\k\0\h\u\p\0\y\c\c\z\3\c\m\m\y\g\b\y\n\n\6\2\x\s\h\k\o\c\b\k\9\x\w\n\o\w\p\v\m\b\y\1\r\s\2\f\5\n\z\q\1\x\r\3\3\k\1\k\5\e\y\3\l\c\d\a\k\b\5\x\a\e\l\r\6\7\v\d\e\y\d\i\r\7\e\k\s\t\h\8\g\b\f\u\o\q\3\h\w\g\o\p\x\l\i\5\t\i\4\d\i\8\0\p\i\c\y\x\v\f\3\s\c\t\2\w\l\s\r\v\w\d\3\w\3\k\6\c\x\1\0\0\q\j\u\p\q\u\q\6\w\b\d\4\7\7\b\x\k\9\x\l\9\o\a\q\c\i\4\f\g\8\e\x\0\a\1\a\p\d\8\v\8\9\p\u\3\0\4\h\r\c\v\2\i\u\j\0\t\7\j\d\s\q\8\y\q\5\4\j\3\6\l\8\7\v\w\b\3\d\w\4\n\p\b\y\u\h\v\0\t\p\g\1\x\q\7\4\p\7\4\3\9\j\3\5\b\w\d\5\0\o\v\d\d\j\b\5\6\u\k\t\9\7\t\u\y\7\b\0\q\o\x\g\5\o\g\l\3\5\b\s\c\p\q\l\8\k\b\o\t\1\p\h\5\e\r\u\1\1\w\b\e\0\u\5\w\g\y\m\k\a\7\d\g\q\n\p\8\z\6\i\7\7\7\u\n\n\p\r\p\b\i\z\o\v\q\6\c\5\c\d\c\w\8\u\g\w\1\k\r\s\h\8\h\h\6\w\3\g\e\6\y\1\p\w\s\4\9\m\2\h\7\4\1\2\y\c\k\i\4\g\f\t\o\n\y\6\6\r\w\9\j\m\b\4\a\j\5\y\c\6\o\r\z\n\y\1\4\e\q\o\w\e\d\g\f\c\8\f\9\p\n\g\4\9\z\4\x\s\p\h\p\i\7\h\w\u\z\d\n\6\k\q\m\v\c\5\v\4\2\u\5\o\o\3\k\q\u\0\z\d\s\g\a\6\q\j\b\f\z\d\j\j\a\9\t\z\n\s ]] 00:07:21.993 ************************************ 00:07:21.993 END TEST dd_rw_offset 00:07:21.993 ************************************ 00:07:21.993 00:07:21.993 real 0m1.622s 00:07:21.993 user 0m1.110s 00:07:21.993 sys 0m0.833s 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.993 14:45:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.993 [2024-11-22 14:45:36.629258] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:21.994 [2024-11-22 14:45:36.629374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:07:21.994 { 00:07:21.994 "subsystems": [ 00:07:21.994 { 00:07:21.994 "subsystem": "bdev", 00:07:21.994 "config": [ 00:07:21.994 { 00:07:21.994 "params": { 00:07:21.994 "trtype": "pcie", 00:07:21.994 "traddr": "0000:00:10.0", 00:07:21.994 "name": "Nvme0" 00:07:21.994 }, 00:07:21.994 "method": "bdev_nvme_attach_controller" 00:07:21.994 }, 00:07:21.994 { 00:07:21.994 "method": "bdev_wait_for_examine" 00:07:21.994 } 00:07:21.994 ] 00:07:21.994 } 00:07:21.994 ] 00:07:21.994 } 00:07:22.257 [2024-11-22 14:45:36.773271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.257 [2024-11-22 14:45:36.857549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.525 [2024-11-22 14:45:36.937749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.525  [2024-11-22T14:45:37.449Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:22.784 00:07:22.784 14:45:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.784 00:07:22.784 real 0m21.010s 00:07:22.784 user 0m14.938s 00:07:22.784 sys 0m8.974s 00:07:22.784 14:45:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.784 ************************************ 00:07:22.784 END TEST spdk_dd_basic_rw 00:07:22.784 ************************************ 00:07:22.784 14:45:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.784 14:45:37 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:22.784 14:45:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.784 14:45:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.784 14:45:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.784 ************************************ 00:07:22.784 START TEST spdk_dd_posix 00:07:22.784 ************************************ 00:07:22.784 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:23.043 * Looking for test storage... 
00:07:23.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.043 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.044 --rc genhtml_branch_coverage=1 00:07:23.044 --rc genhtml_function_coverage=1 00:07:23.044 --rc genhtml_legend=1 00:07:23.044 --rc geninfo_all_blocks=1 00:07:23.044 --rc geninfo_unexecuted_blocks=1 00:07:23.044 00:07:23.044 ' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.044 --rc genhtml_branch_coverage=1 00:07:23.044 --rc genhtml_function_coverage=1 00:07:23.044 --rc genhtml_legend=1 00:07:23.044 --rc geninfo_all_blocks=1 00:07:23.044 --rc geninfo_unexecuted_blocks=1 00:07:23.044 00:07:23.044 ' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.044 --rc genhtml_branch_coverage=1 00:07:23.044 --rc genhtml_function_coverage=1 00:07:23.044 --rc genhtml_legend=1 00:07:23.044 --rc geninfo_all_blocks=1 00:07:23.044 --rc geninfo_unexecuted_blocks=1 00:07:23.044 00:07:23.044 ' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.044 --rc genhtml_branch_coverage=1 00:07:23.044 --rc genhtml_function_coverage=1 00:07:23.044 --rc genhtml_legend=1 00:07:23.044 --rc geninfo_all_blocks=1 00:07:23.044 --rc geninfo_unexecuted_blocks=1 00:07:23.044 00:07:23.044 ' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:23.044 * First test run, liburing in use 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:23.044 ************************************ 00:07:23.044 START TEST dd_flag_append 00:07:23.044 ************************************ 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=q7wlbo7uw1pgts865thx5p64kh3b4yul 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=vkkylob5pv5tn05otaz3z1lbtstxsevz 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s q7wlbo7uw1pgts865thx5p64kh3b4yul 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s vkkylob5pv5tn05otaz3z1lbtstxsevz 00:07:23.044 14:45:37 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:23.044 [2024-11-22 14:45:37.671158] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:23.044 [2024-11-22 14:45:37.671274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60303 ] 00:07:23.303 [2024-11-22 14:45:37.824898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.303 [2024-11-22 14:45:37.917796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.562 [2024-11-22 14:45:37.998863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.562  [2024-11-22T14:45:38.486Z] Copying: 32/32 [B] (average 31 kBps) 00:07:23.821 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ vkkylob5pv5tn05otaz3z1lbtstxsevzq7wlbo7uw1pgts865thx5p64kh3b4yul == \v\k\k\y\l\o\b\5\p\v\5\t\n\0\5\o\t\a\z\3\z\1\l\b\t\s\t\x\s\e\v\z\q\7\w\l\b\o\7\u\w\1\p\g\t\s\8\6\5\t\h\x\5\p\6\4\k\h\3\b\4\y\u\l ]] 00:07:23.821 00:07:23.821 real 0m0.714s 00:07:23.821 user 0m0.401s 00:07:23.821 sys 0m0.397s 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.821 ************************************ 00:07:23.821 END TEST dd_flag_append 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 ************************************ 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 ************************************ 00:07:23.821 START TEST dd_flag_directory 00:07:23.821 ************************************ 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.821 14:45:38 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.821 [2024-11-22 14:45:38.433597] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:23.821 [2024-11-22 14:45:38.433700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ] 00:07:24.080 [2024-11-22 14:45:38.585522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.080 [2024-11-22 14:45:38.654341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.080 [2024-11-22 14:45:38.735009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.338 [2024-11-22 14:45:38.784720] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.338 [2024-11-22 14:45:38.784797] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.339 [2024-11-22 14:45:38.784818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.339 [2024-11-22 14:45:38.956933] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.598 14:45:39 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.598 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.598 [2024-11-22 14:45:39.084003] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:24.598 [2024-11-22 14:45:39.084259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60352 ] 00:07:24.598 [2024-11-22 14:45:39.222421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.856 [2024-11-22 14:45:39.293113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.856 [2024-11-22 14:45:39.367069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.856 [2024-11-22 14:45:39.413252] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.856 [2024-11-22 14:45:39.413328] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.856 [2024-11-22 14:45:39.413350] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.116 [2024-11-22 14:45:39.587642] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.116 00:07:25.116 real 0m1.300s 00:07:25.116 user 0m0.730s 00:07:25.116 sys 0m0.358s 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:25.116 ************************************ 00:07:25.116 END TEST dd_flag_directory 00:07:25.116 ************************************ 00:07:25.116 14:45:39 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.116 ************************************ 00:07:25.116 START TEST dd_flag_nofollow 00:07:25.116 ************************************ 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.116 14:45:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.375 [2024-11-22 14:45:39.795178] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:25.375 [2024-11-22 14:45:39.795282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60381 ] 00:07:25.375 [2024-11-22 14:45:39.948294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.633 [2024-11-22 14:45:40.037640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.633 [2024-11-22 14:45:40.118547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.633 [2024-11-22 14:45:40.169085] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.633 [2024-11-22 14:45:40.169162] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.633 [2024-11-22 14:45:40.169188] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.892 [2024-11-22 14:45:40.345443] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.892 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.893 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.893 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.893 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.893 14:45:40 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.893 14:45:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.893 [2024-11-22 14:45:40.490777] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:25.893 [2024-11-22 14:45:40.490895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 00:07:26.151 [2024-11-22 14:45:40.647621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.151 [2024-11-22 14:45:40.739958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.409 [2024-11-22 14:45:40.821038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.409 [2024-11-22 14:45:40.870449] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.409 [2024-11-22 14:45:40.870521] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.409 [2024-11-22 14:45:40.870548] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.409 [2024-11-22 14:45:41.044544] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:26.669 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.669 [2024-11-22 14:45:41.194339] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:26.669 [2024-11-22 14:45:41.194689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60405 ] 00:07:26.928 [2024-11-22 14:45:41.340710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.928 [2024-11-22 14:45:41.408740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.928 [2024-11-22 14:45:41.485046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.928  [2024-11-22T14:45:41.852Z] Copying: 512/512 [B] (average 500 kBps) 00:07:27.187 00:07:27.187 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ hwc5zzkfhzmtul2xwqa2xedmssfim03s3aksrkqms77v2e37rxodhxgoqriuuiaqcizio0gyxqbm3pjsc5bl3vpbrkjyzf1hdkc26fx3lnqbdifwxpi85ififlbhxi1z47tdx55eoagit8voz2rnecpx5kq33dtkgahm36skv0u590vbrnx9wgm6w6cb7da68t5pxrx7eztchpo7p51r3h4ti0nh8f50fdi0alf0cy7o6l625dsgr2eliy4gkmm9uqmrt1xdd1iqnopkc122jhqpj8k6w30s3m65l2t8ed2qmxnx9xe4ffzjnasn9mfhjeu5pbrgkkw4ttzfhziqi9blrocdrnnuodm0zfwew3sd1ktg8v3zxahw4rqmqqd2xu12we0gq8r5ljqtb5huqysmcghibrgniu5ezhaemvki6vovgd32c0h6z7ghl7e7q9ke4mtad3f8mya1rkza5qe00r5yl21inqzzfrs4abvk7mduk89g7q5a9z1jrke8 == \h\w\c\5\z\z\k\f\h\z\m\t\u\l\2\x\w\q\a\2\x\e\d\m\s\s\f\i\m\0\3\s\3\a\k\s\r\k\q\m\s\7\7\v\2\e\3\7\r\x\o\d\h\x\g\o\q\r\i\u\u\i\a\q\c\i\z\i\o\0\g\y\x\q\b\m\3\p\j\s\c\5\b\l\3\v\p\b\r\k\j\y\z\f\1\h\d\k\c\2\6\f\x\3\l\n\q\b\d\i\f\w\x\p\i\8\5\i\f\i\f\l\b\h\x\i\1\z\4\7\t\d\x\5\5\e\o\a\g\i\t\8\v\o\z\2\r\n\e\c\p\x\5\k\q\3\3\d\t\k\g\a\h\m\3\6\s\k\v\0\u\5\9\0\v\b\r\n\x\9\w\g\m\6\w\6\c\b\7\d\a\6\8\t\5\p\x\r\x\7\e\z\t\c\h\p\o\7\p\5\1\r\3\h\4\t\i\0\n\h\8\f\5\0\f\d\i\0\a\l\f\0\c\y\7\o\6\l\6\2\5\d\s\g\r\2\e\l\i\y\4\g\k\m\m\9\u\q\m\r\t\1\x\d\d\1\i\q\n\o\p\k\c\1\2\2\j\h\q\p\j\8\k\6\w\3\0\s\3\m\6\5\l\2\t\8\e\d\2\q\m\x\n\x\9\x\e\4\f\f\z\j\n\a\s\n\9\m\f\h\j\e\u\5\p\b\r\g\k\k\w\4\t\t\z\f\h\z\i\q\i\9\b\l\r\o\c\d\r\n\n\u\o\d\m\0\z\f\w\e\w\3\s\d\1\k\t\g\8\v\3\z\x\a\h\w\4\r\q\m\q\q\d\2\x\u\1\2\w\e\0\g\q\8\r\5\l\j\q\t\b\5\h\u\q\y\s\m\c\g\h\i\b\r\g\n\i\u\5\e\z\h\a\e\m\v\k\i\6\v\o\v\g\d\3\2\c\0\h\6\z\7\g\h\l\7\e\7\q\9\k\e\4\m\t\a\d\3\f\8\m\y\a\1\r\k\z\a\5\q\e\0\0\r\5\y\l\2\1\i\n\q\z\z\f\r\s\4\a\b\v\k\7\m\d\u\k\8\9\g\7\q\5\a\9\z\1\j\r\k\e\8 ]] 00:07:27.187 00:07:27.187 real 0m2.086s 00:07:27.187 user 0m1.182s 00:07:27.187 sys 0m0.761s 00:07:27.187 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.187 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:27.187 ************************************ 00:07:27.187 END TEST dd_flag_nofollow 00:07:27.187 ************************************ 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:27.445 ************************************ 00:07:27.445 START TEST dd_flag_noatime 00:07:27.445 ************************************ 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732286741 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732286741 00:07:27.445 14:45:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:28.378 14:45:42 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.378 [2024-11-22 14:45:42.955184] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:28.378 [2024-11-22 14:45:42.955302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:07:28.637 [2024-11-22 14:45:43.109194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.637 [2024-11-22 14:45:43.197607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.637 [2024-11-22 14:45:43.278356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.895  [2024-11-22T14:45:43.820Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.155 00:07:29.155 14:45:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.155 14:45:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732286741 )) 00:07:29.155 14:45:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.155 14:45:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732286741 )) 00:07:29.155 14:45:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.155 [2024-11-22 14:45:43.674144] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:29.155 [2024-11-22 14:45:43.674288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60461 ] 00:07:29.413 [2024-11-22 14:45:43.821362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.413 [2024-11-22 14:45:43.902645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.413 [2024-11-22 14:45:43.977781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.413  [2024-11-22T14:45:44.336Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.671 00:07:29.671 14:45:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:29.671 14:45:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732286744 )) 00:07:29.671 00:07:29.671 real 0m2.432s 00:07:29.671 user 0m0.816s 00:07:29.671 sys 0m0.762s 00:07:29.671 ************************************ 00:07:29.671 END TEST dd_flag_noatime 00:07:29.671 ************************************ 00:07:29.671 14:45:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.671 14:45:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:29.930 ************************************ 00:07:29.930 START TEST dd_flags_misc 00:07:29.930 ************************************ 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.930 14:45:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:29.930 [2024-11-22 14:45:44.425984] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:29.930 [2024-11-22 14:45:44.426086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60495 ] 00:07:29.930 [2024-11-22 14:45:44.571058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.266 [2024-11-22 14:45:44.655655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.266 [2024-11-22 14:45:44.733330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.266  [2024-11-22T14:45:45.206Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.541 00:07:30.541 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5wvz01ft7vfiyhzum1aits1xx80nr9wrgpyakayd4v95kyacazv2k13nkm1kcue5rwxefc7q940ewfp5wghnspb4t5e3u1s88v9i6g85lp4woqafkmp3fab81235k5uyvicqioh3enq4ub9srs3j7o61uyp4mfpkpzb8pqfohx5rrjg00kw95ppw8gha2b3hbl6q9spun88nfqvxkj8u4up082b70xa6mcfy2orpd1ttu8jivbxc9lr1ne0bo7lqjmcytogwrcf88ot6zn8zftoxex0ilhsy29xr8rc98k82yeuipiw0zvt2h8m7dpiwuzp6srobvrv5b60zvfotndxuv9u341n3dt22br708kn9pne9lqvl4x1jk2h0h7bqmkp4rtg1muo4q5lttu4b9482h0mki7nl2i662dkqetlcdofqv38lvlwzn5qyfifnrntlyrmnzq4d5fvzxl4qu0v5rc2ijr7t1s2ie9pe1wqaz4qwqcy5k3bovp8pyc3v == \5\w\v\z\0\1\f\t\7\v\f\i\y\h\z\u\m\1\a\i\t\s\1\x\x\8\0\n\r\9\w\r\g\p\y\a\k\a\y\d\4\v\9\5\k\y\a\c\a\z\v\2\k\1\3\n\k\m\1\k\c\u\e\5\r\w\x\e\f\c\7\q\9\4\0\e\w\f\p\5\w\g\h\n\s\p\b\4\t\5\e\3\u\1\s\8\8\v\9\i\6\g\8\5\l\p\4\w\o\q\a\f\k\m\p\3\f\a\b\8\1\2\3\5\k\5\u\y\v\i\c\q\i\o\h\3\e\n\q\4\u\b\9\s\r\s\3\j\7\o\6\1\u\y\p\4\m\f\p\k\p\z\b\8\p\q\f\o\h\x\5\r\r\j\g\0\0\k\w\9\5\p\p\w\8\g\h\a\2\b\3\h\b\l\6\q\9\s\p\u\n\8\8\n\f\q\v\x\k\j\8\u\4\u\p\0\8\2\b\7\0\x\a\6\m\c\f\y\2\o\r\p\d\1\t\t\u\8\j\i\v\b\x\c\9\l\r\1\n\e\0\b\o\7\l\q\j\m\c\y\t\o\g\w\r\c\f\8\8\o\t\6\z\n\8\z\f\t\o\x\e\x\0\i\l\h\s\y\2\9\x\r\8\r\c\9\8\k\8\2\y\e\u\i\p\i\w\0\z\v\t\2\h\8\m\7\d\p\i\w\u\z\p\6\s\r\o\b\v\r\v\5\b\6\0\z\v\f\o\t\n\d\x\u\v\9\u\3\4\1\n\3\d\t\2\2\b\r\7\0\8\k\n\9\p\n\e\9\l\q\v\l\4\x\1\j\k\2\h\0\h\7\b\q\m\k\p\4\r\t\g\1\m\u\o\4\q\5\l\t\t\u\4\b\9\4\8\2\h\0\m\k\i\7\n\l\2\i\6\6\2\d\k\q\e\t\l\c\d\o\f\q\v\3\8\l\v\l\w\z\n\5\q\y\f\i\f\n\r\n\t\l\y\r\m\n\z\q\4\d\5\f\v\z\x\l\4\q\u\0\v\5\r\c\2\i\j\r\7\t\1\s\2\i\e\9\p\e\1\w\q\a\z\4\q\w\q\c\y\5\k\3\b\o\v\p\8\p\y\c\3\v ]] 00:07:30.541 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.541 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:30.541 [2024-11-22 14:45:45.114664] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:30.541 [2024-11-22 14:45:45.114758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:07:30.800 [2024-11-22 14:45:45.268723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.800 [2024-11-22 14:45:45.368801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.800 [2024-11-22 14:45:45.457238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.059  [2024-11-22T14:45:45.983Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.318 00:07:31.318 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5wvz01ft7vfiyhzum1aits1xx80nr9wrgpyakayd4v95kyacazv2k13nkm1kcue5rwxefc7q940ewfp5wghnspb4t5e3u1s88v9i6g85lp4woqafkmp3fab81235k5uyvicqioh3enq4ub9srs3j7o61uyp4mfpkpzb8pqfohx5rrjg00kw95ppw8gha2b3hbl6q9spun88nfqvxkj8u4up082b70xa6mcfy2orpd1ttu8jivbxc9lr1ne0bo7lqjmcytogwrcf88ot6zn8zftoxex0ilhsy29xr8rc98k82yeuipiw0zvt2h8m7dpiwuzp6srobvrv5b60zvfotndxuv9u341n3dt22br708kn9pne9lqvl4x1jk2h0h7bqmkp4rtg1muo4q5lttu4b9482h0mki7nl2i662dkqetlcdofqv38lvlwzn5qyfifnrntlyrmnzq4d5fvzxl4qu0v5rc2ijr7t1s2ie9pe1wqaz4qwqcy5k3bovp8pyc3v == \5\w\v\z\0\1\f\t\7\v\f\i\y\h\z\u\m\1\a\i\t\s\1\x\x\8\0\n\r\9\w\r\g\p\y\a\k\a\y\d\4\v\9\5\k\y\a\c\a\z\v\2\k\1\3\n\k\m\1\k\c\u\e\5\r\w\x\e\f\c\7\q\9\4\0\e\w\f\p\5\w\g\h\n\s\p\b\4\t\5\e\3\u\1\s\8\8\v\9\i\6\g\8\5\l\p\4\w\o\q\a\f\k\m\p\3\f\a\b\8\1\2\3\5\k\5\u\y\v\i\c\q\i\o\h\3\e\n\q\4\u\b\9\s\r\s\3\j\7\o\6\1\u\y\p\4\m\f\p\k\p\z\b\8\p\q\f\o\h\x\5\r\r\j\g\0\0\k\w\9\5\p\p\w\8\g\h\a\2\b\3\h\b\l\6\q\9\s\p\u\n\8\8\n\f\q\v\x\k\j\8\u\4\u\p\0\8\2\b\7\0\x\a\6\m\c\f\y\2\o\r\p\d\1\t\t\u\8\j\i\v\b\x\c\9\l\r\1\n\e\0\b\o\7\l\q\j\m\c\y\t\o\g\w\r\c\f\8\8\o\t\6\z\n\8\z\f\t\o\x\e\x\0\i\l\h\s\y\2\9\x\r\8\r\c\9\8\k\8\2\y\e\u\i\p\i\w\0\z\v\t\2\h\8\m\7\d\p\i\w\u\z\p\6\s\r\o\b\v\r\v\5\b\6\0\z\v\f\o\t\n\d\x\u\v\9\u\3\4\1\n\3\d\t\2\2\b\r\7\0\8\k\n\9\p\n\e\9\l\q\v\l\4\x\1\j\k\2\h\0\h\7\b\q\m\k\p\4\r\t\g\1\m\u\o\4\q\5\l\t\t\u\4\b\9\4\8\2\h\0\m\k\i\7\n\l\2\i\6\6\2\d\k\q\e\t\l\c\d\o\f\q\v\3\8\l\v\l\w\z\n\5\q\y\f\i\f\n\r\n\t\l\y\r\m\n\z\q\4\d\5\f\v\z\x\l\4\q\u\0\v\5\r\c\2\i\j\r\7\t\1\s\2\i\e\9\p\e\1\w\q\a\z\4\q\w\q\c\y\5\k\3\b\o\v\p\8\p\y\c\3\v ]] 00:07:31.318 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.318 14:45:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:31.318 [2024-11-22 14:45:45.837211] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:31.318 [2024-11-22 14:45:45.837297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60514 ] 00:07:31.577 [2024-11-22 14:45:45.985176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.577 [2024-11-22 14:45:46.054504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.577 [2024-11-22 14:45:46.134083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.577  [2024-11-22T14:45:46.502Z] Copying: 512/512 [B] (average 125 kBps) 00:07:31.837 00:07:31.837 14:45:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5wvz01ft7vfiyhzum1aits1xx80nr9wrgpyakayd4v95kyacazv2k13nkm1kcue5rwxefc7q940ewfp5wghnspb4t5e3u1s88v9i6g85lp4woqafkmp3fab81235k5uyvicqioh3enq4ub9srs3j7o61uyp4mfpkpzb8pqfohx5rrjg00kw95ppw8gha2b3hbl6q9spun88nfqvxkj8u4up082b70xa6mcfy2orpd1ttu8jivbxc9lr1ne0bo7lqjmcytogwrcf88ot6zn8zftoxex0ilhsy29xr8rc98k82yeuipiw0zvt2h8m7dpiwuzp6srobvrv5b60zvfotndxuv9u341n3dt22br708kn9pne9lqvl4x1jk2h0h7bqmkp4rtg1muo4q5lttu4b9482h0mki7nl2i662dkqetlcdofqv38lvlwzn5qyfifnrntlyrmnzq4d5fvzxl4qu0v5rc2ijr7t1s2ie9pe1wqaz4qwqcy5k3bovp8pyc3v == \5\w\v\z\0\1\f\t\7\v\f\i\y\h\z\u\m\1\a\i\t\s\1\x\x\8\0\n\r\9\w\r\g\p\y\a\k\a\y\d\4\v\9\5\k\y\a\c\a\z\v\2\k\1\3\n\k\m\1\k\c\u\e\5\r\w\x\e\f\c\7\q\9\4\0\e\w\f\p\5\w\g\h\n\s\p\b\4\t\5\e\3\u\1\s\8\8\v\9\i\6\g\8\5\l\p\4\w\o\q\a\f\k\m\p\3\f\a\b\8\1\2\3\5\k\5\u\y\v\i\c\q\i\o\h\3\e\n\q\4\u\b\9\s\r\s\3\j\7\o\6\1\u\y\p\4\m\f\p\k\p\z\b\8\p\q\f\o\h\x\5\r\r\j\g\0\0\k\w\9\5\p\p\w\8\g\h\a\2\b\3\h\b\l\6\q\9\s\p\u\n\8\8\n\f\q\v\x\k\j\8\u\4\u\p\0\8\2\b\7\0\x\a\6\m\c\f\y\2\o\r\p\d\1\t\t\u\8\j\i\v\b\x\c\9\l\r\1\n\e\0\b\o\7\l\q\j\m\c\y\t\o\g\w\r\c\f\8\8\o\t\6\z\n\8\z\f\t\o\x\e\x\0\i\l\h\s\y\2\9\x\r\8\r\c\9\8\k\8\2\y\e\u\i\p\i\w\0\z\v\t\2\h\8\m\7\d\p\i\w\u\z\p\6\s\r\o\b\v\r\v\5\b\6\0\z\v\f\o\t\n\d\x\u\v\9\u\3\4\1\n\3\d\t\2\2\b\r\7\0\8\k\n\9\p\n\e\9\l\q\v\l\4\x\1\j\k\2\h\0\h\7\b\q\m\k\p\4\r\t\g\1\m\u\o\4\q\5\l\t\t\u\4\b\9\4\8\2\h\0\m\k\i\7\n\l\2\i\6\6\2\d\k\q\e\t\l\c\d\o\f\q\v\3\8\l\v\l\w\z\n\5\q\y\f\i\f\n\r\n\t\l\y\r\m\n\z\q\4\d\5\f\v\z\x\l\4\q\u\0\v\5\r\c\2\i\j\r\7\t\1\s\2\i\e\9\p\e\1\w\q\a\z\4\q\w\q\c\y\5\k\3\b\o\v\p\8\p\y\c\3\v ]] 00:07:31.837 14:45:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.837 14:45:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:32.096 [2024-11-22 14:45:46.516517] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:32.096 [2024-11-22 14:45:46.516646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:07:32.096 [2024-11-22 14:45:46.673010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.096 [2024-11-22 14:45:46.755098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.354 [2024-11-22 14:45:46.832979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.354  [2024-11-22T14:45:47.278Z] Copying: 512/512 [B] (average 500 kBps) 00:07:32.613 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5wvz01ft7vfiyhzum1aits1xx80nr9wrgpyakayd4v95kyacazv2k13nkm1kcue5rwxefc7q940ewfp5wghnspb4t5e3u1s88v9i6g85lp4woqafkmp3fab81235k5uyvicqioh3enq4ub9srs3j7o61uyp4mfpkpzb8pqfohx5rrjg00kw95ppw8gha2b3hbl6q9spun88nfqvxkj8u4up082b70xa6mcfy2orpd1ttu8jivbxc9lr1ne0bo7lqjmcytogwrcf88ot6zn8zftoxex0ilhsy29xr8rc98k82yeuipiw0zvt2h8m7dpiwuzp6srobvrv5b60zvfotndxuv9u341n3dt22br708kn9pne9lqvl4x1jk2h0h7bqmkp4rtg1muo4q5lttu4b9482h0mki7nl2i662dkqetlcdofqv38lvlwzn5qyfifnrntlyrmnzq4d5fvzxl4qu0v5rc2ijr7t1s2ie9pe1wqaz4qwqcy5k3bovp8pyc3v == \5\w\v\z\0\1\f\t\7\v\f\i\y\h\z\u\m\1\a\i\t\s\1\x\x\8\0\n\r\9\w\r\g\p\y\a\k\a\y\d\4\v\9\5\k\y\a\c\a\z\v\2\k\1\3\n\k\m\1\k\c\u\e\5\r\w\x\e\f\c\7\q\9\4\0\e\w\f\p\5\w\g\h\n\s\p\b\4\t\5\e\3\u\1\s\8\8\v\9\i\6\g\8\5\l\p\4\w\o\q\a\f\k\m\p\3\f\a\b\8\1\2\3\5\k\5\u\y\v\i\c\q\i\o\h\3\e\n\q\4\u\b\9\s\r\s\3\j\7\o\6\1\u\y\p\4\m\f\p\k\p\z\b\8\p\q\f\o\h\x\5\r\r\j\g\0\0\k\w\9\5\p\p\w\8\g\h\a\2\b\3\h\b\l\6\q\9\s\p\u\n\8\8\n\f\q\v\x\k\j\8\u\4\u\p\0\8\2\b\7\0\x\a\6\m\c\f\y\2\o\r\p\d\1\t\t\u\8\j\i\v\b\x\c\9\l\r\1\n\e\0\b\o\7\l\q\j\m\c\y\t\o\g\w\r\c\f\8\8\o\t\6\z\n\8\z\f\t\o\x\e\x\0\i\l\h\s\y\2\9\x\r\8\r\c\9\8\k\8\2\y\e\u\i\p\i\w\0\z\v\t\2\h\8\m\7\d\p\i\w\u\z\p\6\s\r\o\b\v\r\v\5\b\6\0\z\v\f\o\t\n\d\x\u\v\9\u\3\4\1\n\3\d\t\2\2\b\r\7\0\8\k\n\9\p\n\e\9\l\q\v\l\4\x\1\j\k\2\h\0\h\7\b\q\m\k\p\4\r\t\g\1\m\u\o\4\q\5\l\t\t\u\4\b\9\4\8\2\h\0\m\k\i\7\n\l\2\i\6\6\2\d\k\q\e\t\l\c\d\o\f\q\v\3\8\l\v\l\w\z\n\5\q\y\f\i\f\n\r\n\t\l\y\r\m\n\z\q\4\d\5\f\v\z\x\l\4\q\u\0\v\5\r\c\2\i\j\r\7\t\1\s\2\i\e\9\p\e\1\w\q\a\z\4\q\w\q\c\y\5\k\3\b\o\v\p\8\p\y\c\3\v ]] 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:32.613 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:32.613 [2024-11-22 14:45:47.218521] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:32.613 [2024-11-22 14:45:47.218826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:07:32.871 [2024-11-22 14:45:47.370628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.871 [2024-11-22 14:45:47.453535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.129 [2024-11-22 14:45:47.541037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.129  [2024-11-22T14:45:48.053Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.388 00:07:33.388 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9l6hruxtwjv4rw84qf1ve2ecu821zv3djmdmclxjwxyah2jh8a70f1x7xzvntp4otgqpl6933sx4i6d8deqd54gu61xihob2u495a0h6gfs8694r236m0nnposl1q2671ltepik94u34eqq3c6tal3kuta2s6dz39rr7sv91de8u2mxtjwatbu1yzz459ktjs5m92ifqmv9dijrm33hfao9x20ajczl7gav5yo22e1yej4d1x6m1j5z21405y6xmqdi1f73jnfx0oa6pu7auompkdop0kc954vo2zlh244tr1k70e1ty0nv4scmftimxv5repgx7fvzzqah0ih5q1otlffszozyfnvt4709p877e16xur917enskbl81jglwlpfhgy6whj8y1vrqjggj1qg4fvjicqkz6ubikdhn7bk6zx195damt6tdh3p11jxtnogophpepdwtwuu52n0uv4ixp0v3tempx9l4oui4zb9ozcu8o7q6whedk4u87q1n == \9\l\6\h\r\u\x\t\w\j\v\4\r\w\8\4\q\f\1\v\e\2\e\c\u\8\2\1\z\v\3\d\j\m\d\m\c\l\x\j\w\x\y\a\h\2\j\h\8\a\7\0\f\1\x\7\x\z\v\n\t\p\4\o\t\g\q\p\l\6\9\3\3\s\x\4\i\6\d\8\d\e\q\d\5\4\g\u\6\1\x\i\h\o\b\2\u\4\9\5\a\0\h\6\g\f\s\8\6\9\4\r\2\3\6\m\0\n\n\p\o\s\l\1\q\2\6\7\1\l\t\e\p\i\k\9\4\u\3\4\e\q\q\3\c\6\t\a\l\3\k\u\t\a\2\s\6\d\z\3\9\r\r\7\s\v\9\1\d\e\8\u\2\m\x\t\j\w\a\t\b\u\1\y\z\z\4\5\9\k\t\j\s\5\m\9\2\i\f\q\m\v\9\d\i\j\r\m\3\3\h\f\a\o\9\x\2\0\a\j\c\z\l\7\g\a\v\5\y\o\2\2\e\1\y\e\j\4\d\1\x\6\m\1\j\5\z\2\1\4\0\5\y\6\x\m\q\d\i\1\f\7\3\j\n\f\x\0\o\a\6\p\u\7\a\u\o\m\p\k\d\o\p\0\k\c\9\5\4\v\o\2\z\l\h\2\4\4\t\r\1\k\7\0\e\1\t\y\0\n\v\4\s\c\m\f\t\i\m\x\v\5\r\e\p\g\x\7\f\v\z\z\q\a\h\0\i\h\5\q\1\o\t\l\f\f\s\z\o\z\y\f\n\v\t\4\7\0\9\p\8\7\7\e\1\6\x\u\r\9\1\7\e\n\s\k\b\l\8\1\j\g\l\w\l\p\f\h\g\y\6\w\h\j\8\y\1\v\r\q\j\g\g\j\1\q\g\4\f\v\j\i\c\q\k\z\6\u\b\i\k\d\h\n\7\b\k\6\z\x\1\9\5\d\a\m\t\6\t\d\h\3\p\1\1\j\x\t\n\o\g\o\p\h\p\e\p\d\w\t\w\u\u\5\2\n\0\u\v\4\i\x\p\0\v\3\t\e\m\p\x\9\l\4\o\u\i\4\z\b\9\o\z\c\u\8\o\7\q\6\w\h\e\d\k\4\u\8\7\q\1\n ]] 00:07:33.388 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.388 14:45:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:33.388 [2024-11-22 14:45:47.932052] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:33.388 [2024-11-22 14:45:47.932181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60550 ] 00:07:33.647 [2024-11-22 14:45:48.088903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.647 [2024-11-22 14:45:48.183357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.647 [2024-11-22 14:45:48.274993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.905  [2024-11-22T14:45:48.827Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.162 00:07:34.162 14:45:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9l6hruxtwjv4rw84qf1ve2ecu821zv3djmdmclxjwxyah2jh8a70f1x7xzvntp4otgqpl6933sx4i6d8deqd54gu61xihob2u495a0h6gfs8694r236m0nnposl1q2671ltepik94u34eqq3c6tal3kuta2s6dz39rr7sv91de8u2mxtjwatbu1yzz459ktjs5m92ifqmv9dijrm33hfao9x20ajczl7gav5yo22e1yej4d1x6m1j5z21405y6xmqdi1f73jnfx0oa6pu7auompkdop0kc954vo2zlh244tr1k70e1ty0nv4scmftimxv5repgx7fvzzqah0ih5q1otlffszozyfnvt4709p877e16xur917enskbl81jglwlpfhgy6whj8y1vrqjggj1qg4fvjicqkz6ubikdhn7bk6zx195damt6tdh3p11jxtnogophpepdwtwuu52n0uv4ixp0v3tempx9l4oui4zb9ozcu8o7q6whedk4u87q1n == \9\l\6\h\r\u\x\t\w\j\v\4\r\w\8\4\q\f\1\v\e\2\e\c\u\8\2\1\z\v\3\d\j\m\d\m\c\l\x\j\w\x\y\a\h\2\j\h\8\a\7\0\f\1\x\7\x\z\v\n\t\p\4\o\t\g\q\p\l\6\9\3\3\s\x\4\i\6\d\8\d\e\q\d\5\4\g\u\6\1\x\i\h\o\b\2\u\4\9\5\a\0\h\6\g\f\s\8\6\9\4\r\2\3\6\m\0\n\n\p\o\s\l\1\q\2\6\7\1\l\t\e\p\i\k\9\4\u\3\4\e\q\q\3\c\6\t\a\l\3\k\u\t\a\2\s\6\d\z\3\9\r\r\7\s\v\9\1\d\e\8\u\2\m\x\t\j\w\a\t\b\u\1\y\z\z\4\5\9\k\t\j\s\5\m\9\2\i\f\q\m\v\9\d\i\j\r\m\3\3\h\f\a\o\9\x\2\0\a\j\c\z\l\7\g\a\v\5\y\o\2\2\e\1\y\e\j\4\d\1\x\6\m\1\j\5\z\2\1\4\0\5\y\6\x\m\q\d\i\1\f\7\3\j\n\f\x\0\o\a\6\p\u\7\a\u\o\m\p\k\d\o\p\0\k\c\9\5\4\v\o\2\z\l\h\2\4\4\t\r\1\k\7\0\e\1\t\y\0\n\v\4\s\c\m\f\t\i\m\x\v\5\r\e\p\g\x\7\f\v\z\z\q\a\h\0\i\h\5\q\1\o\t\l\f\f\s\z\o\z\y\f\n\v\t\4\7\0\9\p\8\7\7\e\1\6\x\u\r\9\1\7\e\n\s\k\b\l\8\1\j\g\l\w\l\p\f\h\g\y\6\w\h\j\8\y\1\v\r\q\j\g\g\j\1\q\g\4\f\v\j\i\c\q\k\z\6\u\b\i\k\d\h\n\7\b\k\6\z\x\1\9\5\d\a\m\t\6\t\d\h\3\p\1\1\j\x\t\n\o\g\o\p\h\p\e\p\d\w\t\w\u\u\5\2\n\0\u\v\4\i\x\p\0\v\3\t\e\m\p\x\9\l\4\o\u\i\4\z\b\9\o\z\c\u\8\o\7\q\6\w\h\e\d\k\4\u\8\7\q\1\n ]] 00:07:34.162 14:45:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.162 14:45:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:34.162 [2024-11-22 14:45:48.714595] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:34.162 [2024-11-22 14:45:48.715079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60563 ] 00:07:34.420 [2024-11-22 14:45:48.866448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.420 [2024-11-22 14:45:48.948778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.420 [2024-11-22 14:45:49.030655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.677  [2024-11-22T14:45:49.601Z] Copying: 512/512 [B] (average 166 kBps) 00:07:34.936 00:07:34.936 14:45:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9l6hruxtwjv4rw84qf1ve2ecu821zv3djmdmclxjwxyah2jh8a70f1x7xzvntp4otgqpl6933sx4i6d8deqd54gu61xihob2u495a0h6gfs8694r236m0nnposl1q2671ltepik94u34eqq3c6tal3kuta2s6dz39rr7sv91de8u2mxtjwatbu1yzz459ktjs5m92ifqmv9dijrm33hfao9x20ajczl7gav5yo22e1yej4d1x6m1j5z21405y6xmqdi1f73jnfx0oa6pu7auompkdop0kc954vo2zlh244tr1k70e1ty0nv4scmftimxv5repgx7fvzzqah0ih5q1otlffszozyfnvt4709p877e16xur917enskbl81jglwlpfhgy6whj8y1vrqjggj1qg4fvjicqkz6ubikdhn7bk6zx195damt6tdh3p11jxtnogophpepdwtwuu52n0uv4ixp0v3tempx9l4oui4zb9ozcu8o7q6whedk4u87q1n == \9\l\6\h\r\u\x\t\w\j\v\4\r\w\8\4\q\f\1\v\e\2\e\c\u\8\2\1\z\v\3\d\j\m\d\m\c\l\x\j\w\x\y\a\h\2\j\h\8\a\7\0\f\1\x\7\x\z\v\n\t\p\4\o\t\g\q\p\l\6\9\3\3\s\x\4\i\6\d\8\d\e\q\d\5\4\g\u\6\1\x\i\h\o\b\2\u\4\9\5\a\0\h\6\g\f\s\8\6\9\4\r\2\3\6\m\0\n\n\p\o\s\l\1\q\2\6\7\1\l\t\e\p\i\k\9\4\u\3\4\e\q\q\3\c\6\t\a\l\3\k\u\t\a\2\s\6\d\z\3\9\r\r\7\s\v\9\1\d\e\8\u\2\m\x\t\j\w\a\t\b\u\1\y\z\z\4\5\9\k\t\j\s\5\m\9\2\i\f\q\m\v\9\d\i\j\r\m\3\3\h\f\a\o\9\x\2\0\a\j\c\z\l\7\g\a\v\5\y\o\2\2\e\1\y\e\j\4\d\1\x\6\m\1\j\5\z\2\1\4\0\5\y\6\x\m\q\d\i\1\f\7\3\j\n\f\x\0\o\a\6\p\u\7\a\u\o\m\p\k\d\o\p\0\k\c\9\5\4\v\o\2\z\l\h\2\4\4\t\r\1\k\7\0\e\1\t\y\0\n\v\4\s\c\m\f\t\i\m\x\v\5\r\e\p\g\x\7\f\v\z\z\q\a\h\0\i\h\5\q\1\o\t\l\f\f\s\z\o\z\y\f\n\v\t\4\7\0\9\p\8\7\7\e\1\6\x\u\r\9\1\7\e\n\s\k\b\l\8\1\j\g\l\w\l\p\f\h\g\y\6\w\h\j\8\y\1\v\r\q\j\g\g\j\1\q\g\4\f\v\j\i\c\q\k\z\6\u\b\i\k\d\h\n\7\b\k\6\z\x\1\9\5\d\a\m\t\6\t\d\h\3\p\1\1\j\x\t\n\o\g\o\p\h\p\e\p\d\w\t\w\u\u\5\2\n\0\u\v\4\i\x\p\0\v\3\t\e\m\p\x\9\l\4\o\u\i\4\z\b\9\o\z\c\u\8\o\7\q\6\w\h\e\d\k\4\u\8\7\q\1\n ]] 00:07:34.936 14:45:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.936 14:45:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:34.936 [2024-11-22 14:45:49.404532] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:34.936 [2024-11-22 14:45:49.404616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60573 ] 00:07:34.936 [2024-11-22 14:45:49.549994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.195 [2024-11-22 14:45:49.621547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.195 [2024-11-22 14:45:49.695178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.195  [2024-11-22T14:45:50.119Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.454 00:07:35.454 ************************************ 00:07:35.454 END TEST dd_flags_misc 00:07:35.454 ************************************ 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9l6hruxtwjv4rw84qf1ve2ecu821zv3djmdmclxjwxyah2jh8a70f1x7xzvntp4otgqpl6933sx4i6d8deqd54gu61xihob2u495a0h6gfs8694r236m0nnposl1q2671ltepik94u34eqq3c6tal3kuta2s6dz39rr7sv91de8u2mxtjwatbu1yzz459ktjs5m92ifqmv9dijrm33hfao9x20ajczl7gav5yo22e1yej4d1x6m1j5z21405y6xmqdi1f73jnfx0oa6pu7auompkdop0kc954vo2zlh244tr1k70e1ty0nv4scmftimxv5repgx7fvzzqah0ih5q1otlffszozyfnvt4709p877e16xur917enskbl81jglwlpfhgy6whj8y1vrqjggj1qg4fvjicqkz6ubikdhn7bk6zx195damt6tdh3p11jxtnogophpepdwtwuu52n0uv4ixp0v3tempx9l4oui4zb9ozcu8o7q6whedk4u87q1n == \9\l\6\h\r\u\x\t\w\j\v\4\r\w\8\4\q\f\1\v\e\2\e\c\u\8\2\1\z\v\3\d\j\m\d\m\c\l\x\j\w\x\y\a\h\2\j\h\8\a\7\0\f\1\x\7\x\z\v\n\t\p\4\o\t\g\q\p\l\6\9\3\3\s\x\4\i\6\d\8\d\e\q\d\5\4\g\u\6\1\x\i\h\o\b\2\u\4\9\5\a\0\h\6\g\f\s\8\6\9\4\r\2\3\6\m\0\n\n\p\o\s\l\1\q\2\6\7\1\l\t\e\p\i\k\9\4\u\3\4\e\q\q\3\c\6\t\a\l\3\k\u\t\a\2\s\6\d\z\3\9\r\r\7\s\v\9\1\d\e\8\u\2\m\x\t\j\w\a\t\b\u\1\y\z\z\4\5\9\k\t\j\s\5\m\9\2\i\f\q\m\v\9\d\i\j\r\m\3\3\h\f\a\o\9\x\2\0\a\j\c\z\l\7\g\a\v\5\y\o\2\2\e\1\y\e\j\4\d\1\x\6\m\1\j\5\z\2\1\4\0\5\y\6\x\m\q\d\i\1\f\7\3\j\n\f\x\0\o\a\6\p\u\7\a\u\o\m\p\k\d\o\p\0\k\c\9\5\4\v\o\2\z\l\h\2\4\4\t\r\1\k\7\0\e\1\t\y\0\n\v\4\s\c\m\f\t\i\m\x\v\5\r\e\p\g\x\7\f\v\z\z\q\a\h\0\i\h\5\q\1\o\t\l\f\f\s\z\o\z\y\f\n\v\t\4\7\0\9\p\8\7\7\e\1\6\x\u\r\9\1\7\e\n\s\k\b\l\8\1\j\g\l\w\l\p\f\h\g\y\6\w\h\j\8\y\1\v\r\q\j\g\g\j\1\q\g\4\f\v\j\i\c\q\k\z\6\u\b\i\k\d\h\n\7\b\k\6\z\x\1\9\5\d\a\m\t\6\t\d\h\3\p\1\1\j\x\t\n\o\g\o\p\h\p\e\p\d\w\t\w\u\u\5\2\n\0\u\v\4\i\x\p\0\v\3\t\e\m\p\x\9\l\4\o\u\i\4\z\b\9\o\z\c\u\8\o\7\q\6\w\h\e\d\k\4\u\8\7\q\1\n ]] 00:07:35.454 00:07:35.454 real 0m5.647s 00:07:35.454 user 0m3.221s 00:07:35.454 sys 0m3.125s 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:35.454 * Second test run, disabling liburing, forcing AIO 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.454 ************************************ 00:07:35.454 START TEST dd_flag_append_forced_aio 00:07:35.454 ************************************ 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=s5avaozlni0n5plp3r1xuhuc3hpw407c 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=9085woc9kg5zx16vswathkwlhgoe4ssq 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s s5avaozlni0n5plp3r1xuhuc3hpw407c 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 9085woc9kg5zx16vswathkwlhgoe4ssq 00:07:35.454 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:35.713 [2024-11-22 14:45:50.130308] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:35.713 [2024-11-22 14:45:50.130428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:07:35.713 [2024-11-22 14:45:50.276661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.713 [2024-11-22 14:45:50.340818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.972 [2024-11-22 14:45:50.413652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.972  [2024-11-22T14:45:50.895Z] Copying: 32/32 [B] (average 31 kBps) 00:07:36.230 00:07:36.230 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 9085woc9kg5zx16vswathkwlhgoe4ssqs5avaozlni0n5plp3r1xuhuc3hpw407c == \9\0\8\5\w\o\c\9\k\g\5\z\x\1\6\v\s\w\a\t\h\k\w\l\h\g\o\e\4\s\s\q\s\5\a\v\a\o\z\l\n\i\0\n\5\p\l\p\3\r\1\x\u\h\u\c\3\h\p\w\4\0\7\c ]] 00:07:36.230 00:07:36.230 real 0m0.709s 00:07:36.230 user 0m0.405s 00:07:36.230 sys 0m0.177s 00:07:36.230 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.230 ************************************ 00:07:36.230 END TEST dd_flag_append_forced_aio 00:07:36.230 ************************************ 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:36.231 ************************************ 00:07:36.231 START TEST dd_flag_directory_forced_aio 00:07:36.231 ************************************ 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.231 14:45:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.231 14:45:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.231 [2024-11-22 14:45:50.885645] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:36.231 [2024-11-22 14:45:50.885974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60633 ] 00:07:36.489 [2024-11-22 14:45:51.033877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.489 [2024-11-22 14:45:51.117418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.746 [2024-11-22 14:45:51.199170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.746 [2024-11-22 14:45:51.252052] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.746 [2024-11-22 14:45:51.252122] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:36.746 [2024-11-22 14:45:51.252157] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.077 [2024-11-22 14:45:51.437126] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.077 14:45:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:37.077 [2024-11-22 14:45:51.586575] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:37.077 [2024-11-22 14:45:51.586691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:07:37.335 [2024-11-22 14:45:51.736466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.335 [2024-11-22 14:45:51.816737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.335 [2024-11-22 14:45:51.897019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.335 [2024-11-22 14:45:51.948549] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:37.335 [2024-11-22 14:45:51.948671] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:37.335 [2024-11-22 14:45:51.948694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.594 [2024-11-22 14:45:52.129009] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:37.594 ************************************ 00:07:37.594 END 
TEST dd_flag_directory_forced_aio 00:07:37.594 ************************************ 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.594 00:07:37.594 real 0m1.414s 00:07:37.594 user 0m0.823s 00:07:37.594 sys 0m0.378s 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.594 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:37.853 ************************************ 00:07:37.853 START TEST dd_flag_nofollow_forced_aio 00:07:37.853 ************************************ 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.853 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.853 [2024-11-22 14:45:52.365504] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:37.853 [2024-11-22 14:45:52.365604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60677 ] 00:07:38.112 [2024-11-22 14:45:52.516587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.112 [2024-11-22 14:45:52.595124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.112 [2024-11-22 14:45:52.674353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.112 [2024-11-22 14:45:52.724296] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:38.112 [2024-11-22 14:45:52.724368] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:38.112 [2024-11-22 14:45:52.724439] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.371 [2024-11-22 14:45:52.913906] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.371 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.371 14:45:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.371 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.371 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.371 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:38.630 [2024-11-22 14:45:53.065229] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:38.631 [2024-11-22 14:45:53.065345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:07:38.631 [2024-11-22 14:45:53.210001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.631 [2024-11-22 14:45:53.278665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.889 [2024-11-22 14:45:53.358735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.889 [2024-11-22 14:45:53.412941] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:38.889 [2024-11-22 14:45:53.413284] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:38.889 [2024-11-22 14:45:53.413315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.148 [2024-11-22 14:45:53.592966] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:39.148 14:45:53 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:39.148 14:45:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.148 [2024-11-22 14:45:53.734229] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:39.148 [2024-11-22 14:45:53.734342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:07:39.408 [2024-11-22 14:45:53.877520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.408 [2024-11-22 14:45:53.963463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.408 [2024-11-22 14:45:54.045634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.667  [2024-11-22T14:45:54.591Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.926 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ycht154mi68hirxzyt63st1dph9dm8708m1wsr1ablqdepup5qqaddnxg3ziwfe0vhsq4qelqo2v06m8g1vf7z4i4z5mopnzqu7w6u4a1fald28fcphy2htkidrb5fmxq61sgujlanbqbpoo17runfjkozstbepvkougf2ve2loqm70l2livr2oy9dgdg46d4x8cx85tkb9cco6wwa9y4qdtuk0vmzqu7sdq4nzo8zpi2f5ivaekpzopeib93brdzy9qq2mn2gd7ghf5pk0waifn3pnpefod0vcvjnzjczy4gg4qcbyd1ehkgrbv9tu2pmyuaqt3lo8bzzdox6tzuzbn9ofr5k7bf3ah6x6ea7hkvru850htou4ll6s5pq1w7bzqf6wngay1yqo6iixhl8bf60kkdbi2i3lbldtygnz3ucw82rpr47su29zl14vxcc5ikoyozqax9my0jklwtrh0du6xyeubflp1v1nxp01d22uxci6u8gl2o1caaqh8 == \y\c\h\t\1\5\4\m\i\6\8\h\i\r\x\z\y\t\6\3\s\t\1\d\p\h\9\d\m\8\7\0\8\m\1\w\s\r\1\a\b\l\q\d\e\p\u\p\5\q\q\a\d\d\n\x\g\3\z\i\w\f\e\0\v\h\s\q\4\q\e\l\q\o\2\v\0\6\m\8\g\1\v\f\7\z\4\i\4\z\5\m\o\p\n\z\q\u\7\w\6\u\4\a\1\f\a\l\d\2\8\f\c\p\h\y\2\h\t\k\i\d\r\b\5\f\m\x\q\6\1\s\g\u\j\l\a\n\b\q\b\p\o\o\1\7\r\u\n\f\j\k\o\z\s\t\b\e\p\v\k\o\u\g\f\2\v\e\2\l\o\q\m\7\0\l\2\l\i\v\r\2\o\y\9\d\g\d\g\4\6\d\4\x\8\c\x\8\5\t\k\b\9\c\c\o\6\w\w\a\9\y\4\q\d\t\u\k\0\v\m\z\q\u\7\s\d\q\4\n\z\o\8\z\p\i\2\f\5\i\v\a\e\k\p\z\o\p\e\i\b\9\3\b\r\d\z\y\9\q\q\2\m\n\2\g\d\7\g\h\f\5\p\k\0\w\a\i\f\n\3\p\n\p\e\f\o\d\0\v\c\v\j\n\z\j\c\z\y\4\g\g\4\q\c\b\y\d\1\e\h\k\g\r\b\v\9\t\u\2\p\m\y\u\a\q\t\3\l\o\8\b\z\z\d\o\x\6\t\z\u\z\b\n\9\o\f\r\5\k\7\b\f\3\a\h\6\x\6\e\a\7\h\k\v\r\u\8\5\0\h\t\o\u\4\l\l\6\s\5\p\q\1\w\7\b\z\q\f\6\w\n\g\a\y\1\y\q\o\6\i\i\x\h\l\8\b\f\6\0\k\k\d\b\i\2\i\3\l\b\l\d\t\y\g\n\z\3\u\c\w\8\2\r\p\r\4\7\s\u\2\9\z\l\1\4\v\x\c\c\5\i\k\o\y\o\z\q\a\x\9\m\y\0\j\k\l\w\t\r\h\0\d\u\6\x\y\e\u\b\f\l\p\1\v\1\n\x\p\0\1\d\2\2\u\x\c\i\6\u\8\g\l\2\o\1\c\a\a\q\h\8 ]] 00:07:39.927 ************************************ 00:07:39.927 00:07:39.927 real 0m2.139s 00:07:39.927 user 0m1.221s 00:07:39.927 sys 0m0.584s 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 END TEST dd_flag_nofollow_forced_aio 00:07:39.927 ************************************ 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio 
noatime 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 ************************************ 00:07:39.927 START TEST dd_flag_noatime_forced_aio 00:07:39.927 ************************************ 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732286754 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732286754 00:07:39.927 14:45:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:40.863 14:45:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.121 [2024-11-22 14:45:55.578479] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:41.121 [2024-11-22 14:45:55.578615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60742 ] 00:07:41.121 [2024-11-22 14:45:55.728672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.380 [2024-11-22 14:45:55.816832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.380 [2024-11-22 14:45:55.900698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.380  [2024-11-22T14:45:56.303Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.638 00:07:41.638 14:45:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.638 14:45:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732286754 )) 00:07:41.638 14:45:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.638 14:45:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732286754 )) 00:07:41.638 14:45:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.897 [2024-11-22 14:45:56.342019] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:41.897 [2024-11-22 14:45:56.342490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60759 ] 00:07:41.897 [2024-11-22 14:45:56.490706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.156 [2024-11-22 14:45:56.579175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.156 [2024-11-22 14:45:56.662317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.156  [2024-11-22T14:45:57.080Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.415 00:07:42.415 14:45:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.415 ************************************ 00:07:42.415 END TEST dd_flag_noatime_forced_aio 00:07:42.415 ************************************ 00:07:42.415 14:45:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732286756 )) 00:07:42.415 00:07:42.415 real 0m2.553s 00:07:42.415 user 0m0.867s 00:07:42.415 sys 0m0.434s 00:07:42.415 14:45:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.415 14:45:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.674 ************************************ 00:07:42.674 START TEST dd_flags_misc_forced_aio 00:07:42.674 ************************************ 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.674 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:42.674 [2024-11-22 14:45:57.161929] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:42.674 [2024-11-22 14:45:57.162384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60786 ] 00:07:42.674 [2024-11-22 14:45:57.313642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.934 [2024-11-22 14:45:57.407525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.934 [2024-11-22 14:45:57.492943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.934  [2024-11-22T14:45:58.168Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.503 00:07:43.503 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ncssf8sxu5oteqeh0vhy8aoiaweks0skswteo21zf4sboi661rjoatgal39i3d251o1zq71nkrqvl5kg5giormclsfbewmh3svou72egjqz4v4xef6wgjz5v6wm4cojwt6vo1ybzu6k70uph6zzhgcjcswsd1jataqrya7ugpfvhzsqmucn6rize21rinjcoto6o05w6bs49qo2mzfrpgmj6kg8htbrrdoz29772u2d6eetfn28js4ig73fykxnlz1raeenup46jkubmd1n6hw1vfkcn24ci2jwmlo5qhzn1557uh598buqk2l1dcszadgwem1o0ekjgv7xls9tpmq56gej6h9sf58duxdsb9n6t16q5tjhod0aopfercasiv3qla03g4d0c40rbxw903ljfn29wnxxigxznycx2fubvt32swupl50oel0q2t2vzi9ge1w7lu9195eaaw7xkl0eifiubfw44riv5chfukvc9t451atsp5oo83lvstjp9 == 
\n\c\s\s\f\8\s\x\u\5\o\t\e\q\e\h\0\v\h\y\8\a\o\i\a\w\e\k\s\0\s\k\s\w\t\e\o\2\1\z\f\4\s\b\o\i\6\6\1\r\j\o\a\t\g\a\l\3\9\i\3\d\2\5\1\o\1\z\q\7\1\n\k\r\q\v\l\5\k\g\5\g\i\o\r\m\c\l\s\f\b\e\w\m\h\3\s\v\o\u\7\2\e\g\j\q\z\4\v\4\x\e\f\6\w\g\j\z\5\v\6\w\m\4\c\o\j\w\t\6\v\o\1\y\b\z\u\6\k\7\0\u\p\h\6\z\z\h\g\c\j\c\s\w\s\d\1\j\a\t\a\q\r\y\a\7\u\g\p\f\v\h\z\s\q\m\u\c\n\6\r\i\z\e\2\1\r\i\n\j\c\o\t\o\6\o\0\5\w\6\b\s\4\9\q\o\2\m\z\f\r\p\g\m\j\6\k\g\8\h\t\b\r\r\d\o\z\2\9\7\7\2\u\2\d\6\e\e\t\f\n\2\8\j\s\4\i\g\7\3\f\y\k\x\n\l\z\1\r\a\e\e\n\u\p\4\6\j\k\u\b\m\d\1\n\6\h\w\1\v\f\k\c\n\2\4\c\i\2\j\w\m\l\o\5\q\h\z\n\1\5\5\7\u\h\5\9\8\b\u\q\k\2\l\1\d\c\s\z\a\d\g\w\e\m\1\o\0\e\k\j\g\v\7\x\l\s\9\t\p\m\q\5\6\g\e\j\6\h\9\s\f\5\8\d\u\x\d\s\b\9\n\6\t\1\6\q\5\t\j\h\o\d\0\a\o\p\f\e\r\c\a\s\i\v\3\q\l\a\0\3\g\4\d\0\c\4\0\r\b\x\w\9\0\3\l\j\f\n\2\9\w\n\x\x\i\g\x\z\n\y\c\x\2\f\u\b\v\t\3\2\s\w\u\p\l\5\0\o\e\l\0\q\2\t\2\v\z\i\9\g\e\1\w\7\l\u\9\1\9\5\e\a\a\w\7\x\k\l\0\e\i\f\i\u\b\f\w\4\4\r\i\v\5\c\h\f\u\k\v\c\9\t\4\5\1\a\t\s\p\5\o\o\8\3\l\v\s\t\j\p\9 ]] 00:07:43.503 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.503 14:45:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:43.503 [2024-11-22 14:45:57.941795] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:43.503 [2024-11-22 14:45:57.941952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60798 ] 00:07:43.503 [2024-11-22 14:45:58.093480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.763 [2024-11-22 14:45:58.189520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.763 [2024-11-22 14:45:58.276708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.763  [2024-11-22T14:45:58.690Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.025 00:07:44.025 14:45:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ncssf8sxu5oteqeh0vhy8aoiaweks0skswteo21zf4sboi661rjoatgal39i3d251o1zq71nkrqvl5kg5giormclsfbewmh3svou72egjqz4v4xef6wgjz5v6wm4cojwt6vo1ybzu6k70uph6zzhgcjcswsd1jataqrya7ugpfvhzsqmucn6rize21rinjcoto6o05w6bs49qo2mzfrpgmj6kg8htbrrdoz29772u2d6eetfn28js4ig73fykxnlz1raeenup46jkubmd1n6hw1vfkcn24ci2jwmlo5qhzn1557uh598buqk2l1dcszadgwem1o0ekjgv7xls9tpmq56gej6h9sf58duxdsb9n6t16q5tjhod0aopfercasiv3qla03g4d0c40rbxw903ljfn29wnxxigxznycx2fubvt32swupl50oel0q2t2vzi9ge1w7lu9195eaaw7xkl0eifiubfw44riv5chfukvc9t451atsp5oo83lvstjp9 == 
\n\c\s\s\f\8\s\x\u\5\o\t\e\q\e\h\0\v\h\y\8\a\o\i\a\w\e\k\s\0\s\k\s\w\t\e\o\2\1\z\f\4\s\b\o\i\6\6\1\r\j\o\a\t\g\a\l\3\9\i\3\d\2\5\1\o\1\z\q\7\1\n\k\r\q\v\l\5\k\g\5\g\i\o\r\m\c\l\s\f\b\e\w\m\h\3\s\v\o\u\7\2\e\g\j\q\z\4\v\4\x\e\f\6\w\g\j\z\5\v\6\w\m\4\c\o\j\w\t\6\v\o\1\y\b\z\u\6\k\7\0\u\p\h\6\z\z\h\g\c\j\c\s\w\s\d\1\j\a\t\a\q\r\y\a\7\u\g\p\f\v\h\z\s\q\m\u\c\n\6\r\i\z\e\2\1\r\i\n\j\c\o\t\o\6\o\0\5\w\6\b\s\4\9\q\o\2\m\z\f\r\p\g\m\j\6\k\g\8\h\t\b\r\r\d\o\z\2\9\7\7\2\u\2\d\6\e\e\t\f\n\2\8\j\s\4\i\g\7\3\f\y\k\x\n\l\z\1\r\a\e\e\n\u\p\4\6\j\k\u\b\m\d\1\n\6\h\w\1\v\f\k\c\n\2\4\c\i\2\j\w\m\l\o\5\q\h\z\n\1\5\5\7\u\h\5\9\8\b\u\q\k\2\l\1\d\c\s\z\a\d\g\w\e\m\1\o\0\e\k\j\g\v\7\x\l\s\9\t\p\m\q\5\6\g\e\j\6\h\9\s\f\5\8\d\u\x\d\s\b\9\n\6\t\1\6\q\5\t\j\h\o\d\0\a\o\p\f\e\r\c\a\s\i\v\3\q\l\a\0\3\g\4\d\0\c\4\0\r\b\x\w\9\0\3\l\j\f\n\2\9\w\n\x\x\i\g\x\z\n\y\c\x\2\f\u\b\v\t\3\2\s\w\u\p\l\5\0\o\e\l\0\q\2\t\2\v\z\i\9\g\e\1\w\7\l\u\9\1\9\5\e\a\a\w\7\x\k\l\0\e\i\f\i\u\b\f\w\4\4\r\i\v\5\c\h\f\u\k\v\c\9\t\4\5\1\a\t\s\p\5\o\o\8\3\l\v\s\t\j\p\9 ]] 00:07:44.025 14:45:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.025 14:45:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:44.283 [2024-11-22 14:45:58.705667] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:44.283 [2024-11-22 14:45:58.705801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60806 ] 00:07:44.283 [2024-11-22 14:45:58.853098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.283 [2024-11-22 14:45:58.926046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.542 [2024-11-22 14:45:59.008099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.542  [2024-11-22T14:45:59.465Z] Copying: 512/512 [B] (average 83 kBps) 00:07:44.800 00:07:44.800 14:45:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ncssf8sxu5oteqeh0vhy8aoiaweks0skswteo21zf4sboi661rjoatgal39i3d251o1zq71nkrqvl5kg5giormclsfbewmh3svou72egjqz4v4xef6wgjz5v6wm4cojwt6vo1ybzu6k70uph6zzhgcjcswsd1jataqrya7ugpfvhzsqmucn6rize21rinjcoto6o05w6bs49qo2mzfrpgmj6kg8htbrrdoz29772u2d6eetfn28js4ig73fykxnlz1raeenup46jkubmd1n6hw1vfkcn24ci2jwmlo5qhzn1557uh598buqk2l1dcszadgwem1o0ekjgv7xls9tpmq56gej6h9sf58duxdsb9n6t16q5tjhod0aopfercasiv3qla03g4d0c40rbxw903ljfn29wnxxigxznycx2fubvt32swupl50oel0q2t2vzi9ge1w7lu9195eaaw7xkl0eifiubfw44riv5chfukvc9t451atsp5oo83lvstjp9 == 
\n\c\s\s\f\8\s\x\u\5\o\t\e\q\e\h\0\v\h\y\8\a\o\i\a\w\e\k\s\0\s\k\s\w\t\e\o\2\1\z\f\4\s\b\o\i\6\6\1\r\j\o\a\t\g\a\l\3\9\i\3\d\2\5\1\o\1\z\q\7\1\n\k\r\q\v\l\5\k\g\5\g\i\o\r\m\c\l\s\f\b\e\w\m\h\3\s\v\o\u\7\2\e\g\j\q\z\4\v\4\x\e\f\6\w\g\j\z\5\v\6\w\m\4\c\o\j\w\t\6\v\o\1\y\b\z\u\6\k\7\0\u\p\h\6\z\z\h\g\c\j\c\s\w\s\d\1\j\a\t\a\q\r\y\a\7\u\g\p\f\v\h\z\s\q\m\u\c\n\6\r\i\z\e\2\1\r\i\n\j\c\o\t\o\6\o\0\5\w\6\b\s\4\9\q\o\2\m\z\f\r\p\g\m\j\6\k\g\8\h\t\b\r\r\d\o\z\2\9\7\7\2\u\2\d\6\e\e\t\f\n\2\8\j\s\4\i\g\7\3\f\y\k\x\n\l\z\1\r\a\e\e\n\u\p\4\6\j\k\u\b\m\d\1\n\6\h\w\1\v\f\k\c\n\2\4\c\i\2\j\w\m\l\o\5\q\h\z\n\1\5\5\7\u\h\5\9\8\b\u\q\k\2\l\1\d\c\s\z\a\d\g\w\e\m\1\o\0\e\k\j\g\v\7\x\l\s\9\t\p\m\q\5\6\g\e\j\6\h\9\s\f\5\8\d\u\x\d\s\b\9\n\6\t\1\6\q\5\t\j\h\o\d\0\a\o\p\f\e\r\c\a\s\i\v\3\q\l\a\0\3\g\4\d\0\c\4\0\r\b\x\w\9\0\3\l\j\f\n\2\9\w\n\x\x\i\g\x\z\n\y\c\x\2\f\u\b\v\t\3\2\s\w\u\p\l\5\0\o\e\l\0\q\2\t\2\v\z\i\9\g\e\1\w\7\l\u\9\1\9\5\e\a\a\w\7\x\k\l\0\e\i\f\i\u\b\f\w\4\4\r\i\v\5\c\h\f\u\k\v\c\9\t\4\5\1\a\t\s\p\5\o\o\8\3\l\v\s\t\j\p\9 ]] 00:07:44.800 14:45:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.800 14:45:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:44.800 [2024-11-22 14:45:59.410733] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:44.800 [2024-11-22 14:45:59.410851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60819 ] 00:07:45.059 [2024-11-22 14:45:59.558315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.059 [2024-11-22 14:45:59.630988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.059 [2024-11-22 14:45:59.711598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.318  [2024-11-22T14:46:00.243Z] Copying: 512/512 [B] (average 250 kBps) 00:07:45.578 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ncssf8sxu5oteqeh0vhy8aoiaweks0skswteo21zf4sboi661rjoatgal39i3d251o1zq71nkrqvl5kg5giormclsfbewmh3svou72egjqz4v4xef6wgjz5v6wm4cojwt6vo1ybzu6k70uph6zzhgcjcswsd1jataqrya7ugpfvhzsqmucn6rize21rinjcoto6o05w6bs49qo2mzfrpgmj6kg8htbrrdoz29772u2d6eetfn28js4ig73fykxnlz1raeenup46jkubmd1n6hw1vfkcn24ci2jwmlo5qhzn1557uh598buqk2l1dcszadgwem1o0ekjgv7xls9tpmq56gej6h9sf58duxdsb9n6t16q5tjhod0aopfercasiv3qla03g4d0c40rbxw903ljfn29wnxxigxznycx2fubvt32swupl50oel0q2t2vzi9ge1w7lu9195eaaw7xkl0eifiubfw44riv5chfukvc9t451atsp5oo83lvstjp9 == 
\n\c\s\s\f\8\s\x\u\5\o\t\e\q\e\h\0\v\h\y\8\a\o\i\a\w\e\k\s\0\s\k\s\w\t\e\o\2\1\z\f\4\s\b\o\i\6\6\1\r\j\o\a\t\g\a\l\3\9\i\3\d\2\5\1\o\1\z\q\7\1\n\k\r\q\v\l\5\k\g\5\g\i\o\r\m\c\l\s\f\b\e\w\m\h\3\s\v\o\u\7\2\e\g\j\q\z\4\v\4\x\e\f\6\w\g\j\z\5\v\6\w\m\4\c\o\j\w\t\6\v\o\1\y\b\z\u\6\k\7\0\u\p\h\6\z\z\h\g\c\j\c\s\w\s\d\1\j\a\t\a\q\r\y\a\7\u\g\p\f\v\h\z\s\q\m\u\c\n\6\r\i\z\e\2\1\r\i\n\j\c\o\t\o\6\o\0\5\w\6\b\s\4\9\q\o\2\m\z\f\r\p\g\m\j\6\k\g\8\h\t\b\r\r\d\o\z\2\9\7\7\2\u\2\d\6\e\e\t\f\n\2\8\j\s\4\i\g\7\3\f\y\k\x\n\l\z\1\r\a\e\e\n\u\p\4\6\j\k\u\b\m\d\1\n\6\h\w\1\v\f\k\c\n\2\4\c\i\2\j\w\m\l\o\5\q\h\z\n\1\5\5\7\u\h\5\9\8\b\u\q\k\2\l\1\d\c\s\z\a\d\g\w\e\m\1\o\0\e\k\j\g\v\7\x\l\s\9\t\p\m\q\5\6\g\e\j\6\h\9\s\f\5\8\d\u\x\d\s\b\9\n\6\t\1\6\q\5\t\j\h\o\d\0\a\o\p\f\e\r\c\a\s\i\v\3\q\l\a\0\3\g\4\d\0\c\4\0\r\b\x\w\9\0\3\l\j\f\n\2\9\w\n\x\x\i\g\x\z\n\y\c\x\2\f\u\b\v\t\3\2\s\w\u\p\l\5\0\o\e\l\0\q\2\t\2\v\z\i\9\g\e\1\w\7\l\u\9\1\9\5\e\a\a\w\7\x\k\l\0\e\i\f\i\u\b\f\w\4\4\r\i\v\5\c\h\f\u\k\v\c\9\t\4\5\1\a\t\s\p\5\o\o\8\3\l\v\s\t\j\p\9 ]] 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.578 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:45.578 [2024-11-22 14:46:00.130159] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:45.578 [2024-11-22 14:46:00.130447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60826 ] 00:07:45.837 [2024-11-22 14:46:00.279836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.837 [2024-11-22 14:46:00.355403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.837 [2024-11-22 14:46:00.434818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.837  [2024-11-22T14:46:01.070Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.405 00:07:46.405 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1b54zi6v5rim6b3tpls159pallmn9rqxhn3zddifphq0p71vosfquszgb6arkq2h8k4pxqau1cr2qmpl7v3suclfbdb2djthx5dsmfzma0sta3cp394g7pf4ptkhq2mraqnwlu0uuxzcxc1lufh26w2d3r7vjga7cmwck5bpz8eed3zwggthlysvvsj3jssehkesifnv5nolakmb64supelm9zn848rhgjrnyb4efyvk7ndyjx4nyge9hax1mw4k1ngyutgj2dy77ocoubjzl4d8yeho2dfiaaqu8u04osxgno633bwu1eb1lm3fk91kmmzbxsuc2bplqbwhpj8dbi86hyxnlsks5qn6ai6wns3z5ju8hfsxu0gq36ouwn4dewbn3umii2n9u44qfkvwnr6mrv5tqhyjx51nv9gkpkqppk8llfk48sfbwqczfp0a4vzzprfhf2lp5gshrhog0m6sp4rtf9nhjfg8xuqogsifqisjmu707d05d1xjrrfn == \1\b\5\4\z\i\6\v\5\r\i\m\6\b\3\t\p\l\s\1\5\9\p\a\l\l\m\n\9\r\q\x\h\n\3\z\d\d\i\f\p\h\q\0\p\7\1\v\o\s\f\q\u\s\z\g\b\6\a\r\k\q\2\h\8\k\4\p\x\q\a\u\1\c\r\2\q\m\p\l\7\v\3\s\u\c\l\f\b\d\b\2\d\j\t\h\x\5\d\s\m\f\z\m\a\0\s\t\a\3\c\p\3\9\4\g\7\p\f\4\p\t\k\h\q\2\m\r\a\q\n\w\l\u\0\u\u\x\z\c\x\c\1\l\u\f\h\2\6\w\2\d\3\r\7\v\j\g\a\7\c\m\w\c\k\5\b\p\z\8\e\e\d\3\z\w\g\g\t\h\l\y\s\v\v\s\j\3\j\s\s\e\h\k\e\s\i\f\n\v\5\n\o\l\a\k\m\b\6\4\s\u\p\e\l\m\9\z\n\8\4\8\r\h\g\j\r\n\y\b\4\e\f\y\v\k\7\n\d\y\j\x\4\n\y\g\e\9\h\a\x\1\m\w\4\k\1\n\g\y\u\t\g\j\2\d\y\7\7\o\c\o\u\b\j\z\l\4\d\8\y\e\h\o\2\d\f\i\a\a\q\u\8\u\0\4\o\s\x\g\n\o\6\3\3\b\w\u\1\e\b\1\l\m\3\f\k\9\1\k\m\m\z\b\x\s\u\c\2\b\p\l\q\b\w\h\p\j\8\d\b\i\8\6\h\y\x\n\l\s\k\s\5\q\n\6\a\i\6\w\n\s\3\z\5\j\u\8\h\f\s\x\u\0\g\q\3\6\o\u\w\n\4\d\e\w\b\n\3\u\m\i\i\2\n\9\u\4\4\q\f\k\v\w\n\r\6\m\r\v\5\t\q\h\y\j\x\5\1\n\v\9\g\k\p\k\q\p\p\k\8\l\l\f\k\4\8\s\f\b\w\q\c\z\f\p\0\a\4\v\z\z\p\r\f\h\f\2\l\p\5\g\s\h\r\h\o\g\0\m\6\s\p\4\r\t\f\9\n\h\j\f\g\8\x\u\q\o\g\s\i\f\q\i\s\j\m\u\7\0\7\d\0\5\d\1\x\j\r\r\f\n ]] 00:07:46.405 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.405 14:46:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:46.405 [2024-11-22 14:46:00.844177] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:46.405 [2024-11-22 14:46:00.844297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60834 ] 00:07:46.405 [2024-11-22 14:46:00.990252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.664 [2024-11-22 14:46:01.068599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.664 [2024-11-22 14:46:01.150770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.664  [2024-11-22T14:46:01.588Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.923 00:07:46.923 14:46:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1b54zi6v5rim6b3tpls159pallmn9rqxhn3zddifphq0p71vosfquszgb6arkq2h8k4pxqau1cr2qmpl7v3suclfbdb2djthx5dsmfzma0sta3cp394g7pf4ptkhq2mraqnwlu0uuxzcxc1lufh26w2d3r7vjga7cmwck5bpz8eed3zwggthlysvvsj3jssehkesifnv5nolakmb64supelm9zn848rhgjrnyb4efyvk7ndyjx4nyge9hax1mw4k1ngyutgj2dy77ocoubjzl4d8yeho2dfiaaqu8u04osxgno633bwu1eb1lm3fk91kmmzbxsuc2bplqbwhpj8dbi86hyxnlsks5qn6ai6wns3z5ju8hfsxu0gq36ouwn4dewbn3umii2n9u44qfkvwnr6mrv5tqhyjx51nv9gkpkqppk8llfk48sfbwqczfp0a4vzzprfhf2lp5gshrhog0m6sp4rtf9nhjfg8xuqogsifqisjmu707d05d1xjrrfn == \1\b\5\4\z\i\6\v\5\r\i\m\6\b\3\t\p\l\s\1\5\9\p\a\l\l\m\n\9\r\q\x\h\n\3\z\d\d\i\f\p\h\q\0\p\7\1\v\o\s\f\q\u\s\z\g\b\6\a\r\k\q\2\h\8\k\4\p\x\q\a\u\1\c\r\2\q\m\p\l\7\v\3\s\u\c\l\f\b\d\b\2\d\j\t\h\x\5\d\s\m\f\z\m\a\0\s\t\a\3\c\p\3\9\4\g\7\p\f\4\p\t\k\h\q\2\m\r\a\q\n\w\l\u\0\u\u\x\z\c\x\c\1\l\u\f\h\2\6\w\2\d\3\r\7\v\j\g\a\7\c\m\w\c\k\5\b\p\z\8\e\e\d\3\z\w\g\g\t\h\l\y\s\v\v\s\j\3\j\s\s\e\h\k\e\s\i\f\n\v\5\n\o\l\a\k\m\b\6\4\s\u\p\e\l\m\9\z\n\8\4\8\r\h\g\j\r\n\y\b\4\e\f\y\v\k\7\n\d\y\j\x\4\n\y\g\e\9\h\a\x\1\m\w\4\k\1\n\g\y\u\t\g\j\2\d\y\7\7\o\c\o\u\b\j\z\l\4\d\8\y\e\h\o\2\d\f\i\a\a\q\u\8\u\0\4\o\s\x\g\n\o\6\3\3\b\w\u\1\e\b\1\l\m\3\f\k\9\1\k\m\m\z\b\x\s\u\c\2\b\p\l\q\b\w\h\p\j\8\d\b\i\8\6\h\y\x\n\l\s\k\s\5\q\n\6\a\i\6\w\n\s\3\z\5\j\u\8\h\f\s\x\u\0\g\q\3\6\o\u\w\n\4\d\e\w\b\n\3\u\m\i\i\2\n\9\u\4\4\q\f\k\v\w\n\r\6\m\r\v\5\t\q\h\y\j\x\5\1\n\v\9\g\k\p\k\q\p\p\k\8\l\l\f\k\4\8\s\f\b\w\q\c\z\f\p\0\a\4\v\z\z\p\r\f\h\f\2\l\p\5\g\s\h\r\h\o\g\0\m\6\s\p\4\r\t\f\9\n\h\j\f\g\8\x\u\q\o\g\s\i\f\q\i\s\j\m\u\7\0\7\d\0\5\d\1\x\j\r\r\f\n ]] 00:07:46.923 14:46:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.923 14:46:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:46.923 [2024-11-22 14:46:01.576882] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:46.923 [2024-11-22 14:46:01.576983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60847 ] 00:07:47.182 [2024-11-22 14:46:01.724496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.182 [2024-11-22 14:46:01.780036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.442 [2024-11-22 14:46:01.860686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.442  [2024-11-22T14:46:02.366Z] Copying: 512/512 [B] (average 55 kBps) 00:07:47.701 00:07:47.701 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1b54zi6v5rim6b3tpls159pallmn9rqxhn3zddifphq0p71vosfquszgb6arkq2h8k4pxqau1cr2qmpl7v3suclfbdb2djthx5dsmfzma0sta3cp394g7pf4ptkhq2mraqnwlu0uuxzcxc1lufh26w2d3r7vjga7cmwck5bpz8eed3zwggthlysvvsj3jssehkesifnv5nolakmb64supelm9zn848rhgjrnyb4efyvk7ndyjx4nyge9hax1mw4k1ngyutgj2dy77ocoubjzl4d8yeho2dfiaaqu8u04osxgno633bwu1eb1lm3fk91kmmzbxsuc2bplqbwhpj8dbi86hyxnlsks5qn6ai6wns3z5ju8hfsxu0gq36ouwn4dewbn3umii2n9u44qfkvwnr6mrv5tqhyjx51nv9gkpkqppk8llfk48sfbwqczfp0a4vzzprfhf2lp5gshrhog0m6sp4rtf9nhjfg8xuqogsifqisjmu707d05d1xjrrfn == \1\b\5\4\z\i\6\v\5\r\i\m\6\b\3\t\p\l\s\1\5\9\p\a\l\l\m\n\9\r\q\x\h\n\3\z\d\d\i\f\p\h\q\0\p\7\1\v\o\s\f\q\u\s\z\g\b\6\a\r\k\q\2\h\8\k\4\p\x\q\a\u\1\c\r\2\q\m\p\l\7\v\3\s\u\c\l\f\b\d\b\2\d\j\t\h\x\5\d\s\m\f\z\m\a\0\s\t\a\3\c\p\3\9\4\g\7\p\f\4\p\t\k\h\q\2\m\r\a\q\n\w\l\u\0\u\u\x\z\c\x\c\1\l\u\f\h\2\6\w\2\d\3\r\7\v\j\g\a\7\c\m\w\c\k\5\b\p\z\8\e\e\d\3\z\w\g\g\t\h\l\y\s\v\v\s\j\3\j\s\s\e\h\k\e\s\i\f\n\v\5\n\o\l\a\k\m\b\6\4\s\u\p\e\l\m\9\z\n\8\4\8\r\h\g\j\r\n\y\b\4\e\f\y\v\k\7\n\d\y\j\x\4\n\y\g\e\9\h\a\x\1\m\w\4\k\1\n\g\y\u\t\g\j\2\d\y\7\7\o\c\o\u\b\j\z\l\4\d\8\y\e\h\o\2\d\f\i\a\a\q\u\8\u\0\4\o\s\x\g\n\o\6\3\3\b\w\u\1\e\b\1\l\m\3\f\k\9\1\k\m\m\z\b\x\s\u\c\2\b\p\l\q\b\w\h\p\j\8\d\b\i\8\6\h\y\x\n\l\s\k\s\5\q\n\6\a\i\6\w\n\s\3\z\5\j\u\8\h\f\s\x\u\0\g\q\3\6\o\u\w\n\4\d\e\w\b\n\3\u\m\i\i\2\n\9\u\4\4\q\f\k\v\w\n\r\6\m\r\v\5\t\q\h\y\j\x\5\1\n\v\9\g\k\p\k\q\p\p\k\8\l\l\f\k\4\8\s\f\b\w\q\c\z\f\p\0\a\4\v\z\z\p\r\f\h\f\2\l\p\5\g\s\h\r\h\o\g\0\m\6\s\p\4\r\t\f\9\n\h\j\f\g\8\x\u\q\o\g\s\i\f\q\i\s\j\m\u\7\0\7\d\0\5\d\1\x\j\r\r\f\n ]] 00:07:47.701 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.701 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:47.701 [2024-11-22 14:46:02.284256] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:47.701 [2024-11-22 14:46:02.284354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:07:47.959 [2024-11-22 14:46:02.427727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.959 [2024-11-22 14:46:02.485552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.959 [2024-11-22 14:46:02.565100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.959  [2024-11-22T14:46:03.192Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.527 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 1b54zi6v5rim6b3tpls159pallmn9rqxhn3zddifphq0p71vosfquszgb6arkq2h8k4pxqau1cr2qmpl7v3suclfbdb2djthx5dsmfzma0sta3cp394g7pf4ptkhq2mraqnwlu0uuxzcxc1lufh26w2d3r7vjga7cmwck5bpz8eed3zwggthlysvvsj3jssehkesifnv5nolakmb64supelm9zn848rhgjrnyb4efyvk7ndyjx4nyge9hax1mw4k1ngyutgj2dy77ocoubjzl4d8yeho2dfiaaqu8u04osxgno633bwu1eb1lm3fk91kmmzbxsuc2bplqbwhpj8dbi86hyxnlsks5qn6ai6wns3z5ju8hfsxu0gq36ouwn4dewbn3umii2n9u44qfkvwnr6mrv5tqhyjx51nv9gkpkqppk8llfk48sfbwqczfp0a4vzzprfhf2lp5gshrhog0m6sp4rtf9nhjfg8xuqogsifqisjmu707d05d1xjrrfn == \1\b\5\4\z\i\6\v\5\r\i\m\6\b\3\t\p\l\s\1\5\9\p\a\l\l\m\n\9\r\q\x\h\n\3\z\d\d\i\f\p\h\q\0\p\7\1\v\o\s\f\q\u\s\z\g\b\6\a\r\k\q\2\h\8\k\4\p\x\q\a\u\1\c\r\2\q\m\p\l\7\v\3\s\u\c\l\f\b\d\b\2\d\j\t\h\x\5\d\s\m\f\z\m\a\0\s\t\a\3\c\p\3\9\4\g\7\p\f\4\p\t\k\h\q\2\m\r\a\q\n\w\l\u\0\u\u\x\z\c\x\c\1\l\u\f\h\2\6\w\2\d\3\r\7\v\j\g\a\7\c\m\w\c\k\5\b\p\z\8\e\e\d\3\z\w\g\g\t\h\l\y\s\v\v\s\j\3\j\s\s\e\h\k\e\s\i\f\n\v\5\n\o\l\a\k\m\b\6\4\s\u\p\e\l\m\9\z\n\8\4\8\r\h\g\j\r\n\y\b\4\e\f\y\v\k\7\n\d\y\j\x\4\n\y\g\e\9\h\a\x\1\m\w\4\k\1\n\g\y\u\t\g\j\2\d\y\7\7\o\c\o\u\b\j\z\l\4\d\8\y\e\h\o\2\d\f\i\a\a\q\u\8\u\0\4\o\s\x\g\n\o\6\3\3\b\w\u\1\e\b\1\l\m\3\f\k\9\1\k\m\m\z\b\x\s\u\c\2\b\p\l\q\b\w\h\p\j\8\d\b\i\8\6\h\y\x\n\l\s\k\s\5\q\n\6\a\i\6\w\n\s\3\z\5\j\u\8\h\f\s\x\u\0\g\q\3\6\o\u\w\n\4\d\e\w\b\n\3\u\m\i\i\2\n\9\u\4\4\q\f\k\v\w\n\r\6\m\r\v\5\t\q\h\y\j\x\5\1\n\v\9\g\k\p\k\q\p\p\k\8\l\l\f\k\4\8\s\f\b\w\q\c\z\f\p\0\a\4\v\z\z\p\r\f\h\f\2\l\p\5\g\s\h\r\h\o\g\0\m\6\s\p\4\r\t\f\9\n\h\j\f\g\8\x\u\q\o\g\s\i\f\q\i\s\j\m\u\7\0\7\d\0\5\d\1\x\j\r\r\f\n ]] 00:07:48.527 00:07:48.527 real 0m5.807s 00:07:48.527 user 0m3.256s 00:07:48.527 sys 0m1.538s 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.527 ************************************ 00:07:48.527 END TEST dd_flags_misc_forced_aio 00:07:48.527 ************************************ 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:48.527 ************************************ 00:07:48.527 END TEST spdk_dd_posix 00:07:48.527 ************************************ 00:07:48.527 00:07:48.527 real 0m25.571s 00:07:48.527 user 0m13.224s 00:07:48.527 sys 0m8.942s 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.527 14:46:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:48.527 14:46:03 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:48.527 14:46:03 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.527 14:46:03 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.527 14:46:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:48.527 ************************************ 00:07:48.527 START TEST spdk_dd_malloc 00:07:48.527 ************************************ 00:07:48.527 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:48.527 * Looking for test storage... 00:07:48.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:48.527 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.528 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.814 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.815 --rc genhtml_branch_coverage=1 00:07:48.815 --rc genhtml_function_coverage=1 00:07:48.815 --rc genhtml_legend=1 00:07:48.815 --rc geninfo_all_blocks=1 00:07:48.815 --rc geninfo_unexecuted_blocks=1 00:07:48.815 00:07:48.815 ' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.815 --rc genhtml_branch_coverage=1 00:07:48.815 --rc genhtml_function_coverage=1 00:07:48.815 --rc genhtml_legend=1 00:07:48.815 --rc geninfo_all_blocks=1 00:07:48.815 --rc geninfo_unexecuted_blocks=1 00:07:48.815 00:07:48.815 ' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.815 --rc genhtml_branch_coverage=1 00:07:48.815 --rc genhtml_function_coverage=1 00:07:48.815 --rc genhtml_legend=1 00:07:48.815 --rc geninfo_all_blocks=1 00:07:48.815 --rc geninfo_unexecuted_blocks=1 00:07:48.815 00:07:48.815 ' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.815 --rc genhtml_branch_coverage=1 00:07:48.815 --rc genhtml_function_coverage=1 00:07:48.815 --rc genhtml_legend=1 00:07:48.815 --rc geninfo_all_blocks=1 00:07:48.815 --rc geninfo_unexecuted_blocks=1 00:07:48.815 00:07:48.815 ' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.815 14:46:03 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:48.815 ************************************ 00:07:48.815 START TEST dd_malloc_copy 00:07:48.815 ************************************ 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:48.815 14:46:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:48.815 [2024-11-22 14:46:03.276212] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:07:48.815 [2024-11-22 14:46:03.276518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60939 ] 00:07:48.815 { 00:07:48.815 "subsystems": [ 00:07:48.815 { 00:07:48.815 "subsystem": "bdev", 00:07:48.815 "config": [ 00:07:48.815 { 00:07:48.815 "params": { 00:07:48.815 "block_size": 512, 00:07:48.815 "num_blocks": 1048576, 00:07:48.815 "name": "malloc0" 00:07:48.815 }, 00:07:48.815 "method": "bdev_malloc_create" 00:07:48.815 }, 00:07:48.815 { 00:07:48.815 "params": { 00:07:48.815 "block_size": 512, 00:07:48.815 "num_blocks": 1048576, 00:07:48.815 "name": "malloc1" 00:07:48.815 }, 00:07:48.815 "method": "bdev_malloc_create" 00:07:48.815 }, 00:07:48.815 { 00:07:48.815 "method": "bdev_wait_for_examine" 00:07:48.815 } 00:07:48.815 ] 00:07:48.815 } 00:07:48.815 ] 00:07:48.815 } 00:07:48.815 [2024-11-22 14:46:03.421248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.074 [2024-11-22 14:46:03.506364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.074 [2024-11-22 14:46:03.586421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.455  [2024-11-22T14:46:06.056Z] Copying: 206/512 [MB] (206 MBps) [2024-11-22T14:46:06.623Z] Copying: 428/512 [MB] (222 MBps) [2024-11-22T14:46:07.559Z] Copying: 512/512 [MB] (average 216 MBps) 00:07:52.894 00:07:52.894 14:46:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:52.894 14:46:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:52.894 14:46:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:52.894 14:46:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.894 [2024-11-22 14:46:07.355844] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:52.894 [2024-11-22 14:46:07.355957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60987 ] 00:07:52.894 { 00:07:52.894 "subsystems": [ 00:07:52.894 { 00:07:52.894 "subsystem": "bdev", 00:07:52.894 "config": [ 00:07:52.894 { 00:07:52.894 "params": { 00:07:52.894 "block_size": 512, 00:07:52.894 "num_blocks": 1048576, 00:07:52.894 "name": "malloc0" 00:07:52.894 }, 00:07:52.894 "method": "bdev_malloc_create" 00:07:52.894 }, 00:07:52.894 { 00:07:52.894 "params": { 00:07:52.894 "block_size": 512, 00:07:52.894 "num_blocks": 1048576, 00:07:52.894 "name": "malloc1" 00:07:52.894 }, 00:07:52.894 "method": "bdev_malloc_create" 00:07:52.894 }, 00:07:52.894 { 00:07:52.894 "method": "bdev_wait_for_examine" 00:07:52.894 } 00:07:52.894 ] 00:07:52.894 } 00:07:52.894 ] 00:07:52.894 } 00:07:52.894 [2024-11-22 14:46:07.500967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.894 [2024-11-22 14:46:07.554470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.152 [2024-11-22 14:46:07.636723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.527  [2024-11-22T14:46:10.566Z] Copying: 220/512 [MB] (220 MBps) [2024-11-22T14:46:10.566Z] Copying: 443/512 [MB] (222 MBps) [2024-11-22T14:46:11.500Z] Copying: 512/512 [MB] (average 223 MBps) 00:07:56.835 00:07:56.835 ************************************ 00:07:56.835 END TEST dd_malloc_copy 00:07:56.835 ************************************ 00:07:56.835 00:07:56.835 real 0m8.055s 00:07:56.835 user 0m6.703s 00:07:56.835 sys 0m1.191s 00:07:56.835 14:46:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.835 14:46:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.835 ************************************ 00:07:56.835 END TEST spdk_dd_malloc 00:07:56.835 ************************************ 00:07:56.835 00:07:56.835 real 0m8.309s 00:07:56.836 user 0m6.841s 00:07:56.836 sys 0m1.310s 00:07:56.836 14:46:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.836 14:46:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:56.836 14:46:11 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:56.836 14:46:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:56.836 14:46:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.836 14:46:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:56.836 ************************************ 00:07:56.836 START TEST spdk_dd_bdev_to_bdev 00:07:56.836 ************************************ 00:07:56.836 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:56.836 * Looking for test storage... 
00:07:56.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.836 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.836 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.836 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.094 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.094 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.094 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.095 --rc genhtml_branch_coverage=1 00:07:57.095 --rc genhtml_function_coverage=1 00:07:57.095 --rc genhtml_legend=1 00:07:57.095 --rc geninfo_all_blocks=1 00:07:57.095 --rc geninfo_unexecuted_blocks=1 00:07:57.095 00:07:57.095 ' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.095 --rc genhtml_branch_coverage=1 00:07:57.095 --rc genhtml_function_coverage=1 00:07:57.095 --rc genhtml_legend=1 00:07:57.095 --rc geninfo_all_blocks=1 00:07:57.095 --rc geninfo_unexecuted_blocks=1 00:07:57.095 00:07:57.095 ' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.095 --rc genhtml_branch_coverage=1 00:07:57.095 --rc genhtml_function_coverage=1 00:07:57.095 --rc genhtml_legend=1 00:07:57.095 --rc geninfo_all_blocks=1 00:07:57.095 --rc geninfo_unexecuted_blocks=1 00:07:57.095 00:07:57.095 ' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.095 --rc genhtml_branch_coverage=1 00:07:57.095 --rc genhtml_function_coverage=1 00:07:57.095 --rc genhtml_legend=1 00:07:57.095 --rc geninfo_all_blocks=1 00:07:57.095 --rc geninfo_unexecuted_blocks=1 00:07:57.095 00:07:57.095 ' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.095 14:46:11 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.095 ************************************ 00:07:57.095 START TEST dd_inflate_file 00:07:57.095 ************************************ 00:07:57.095 14:46:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:57.095 [2024-11-22 14:46:11.625268] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:57.095 [2024-11-22 14:46:11.625384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61115 ] 00:07:57.354 [2024-11-22 14:46:11.771489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.354 [2024-11-22 14:46:11.824987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.354 [2024-11-22 14:46:11.900930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.354  [2024-11-22T14:46:12.277Z] Copying: 64/64 [MB] (average 1306 MBps) 00:07:57.612 00:07:57.870 ************************************ 00:07:57.870 END TEST dd_inflate_file 00:07:57.870 ************************************ 00:07:57.870 00:07:57.870 real 0m0.707s 00:07:57.870 user 0m0.420s 00:07:57.870 sys 0m0.400s 00:07:57.870 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.870 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:57.870 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:57.870 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:57.871 ************************************ 00:07:57.871 START TEST dd_copy_to_out_bdev 00:07:57.871 ************************************ 00:07:57.871 14:46:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:57.871 { 00:07:57.871 "subsystems": [ 00:07:57.871 { 00:07:57.871 "subsystem": "bdev", 00:07:57.871 "config": [ 00:07:57.871 { 00:07:57.871 "params": { 00:07:57.871 "trtype": "pcie", 00:07:57.871 "traddr": "0000:00:10.0", 00:07:57.871 "name": "Nvme0" 00:07:57.871 }, 00:07:57.871 "method": "bdev_nvme_attach_controller" 00:07:57.871 }, 00:07:57.871 { 00:07:57.871 "params": { 00:07:57.871 "trtype": "pcie", 00:07:57.871 "traddr": "0000:00:11.0", 00:07:57.871 "name": "Nvme1" 00:07:57.871 }, 00:07:57.871 "method": "bdev_nvme_attach_controller" 00:07:57.871 }, 00:07:57.871 { 00:07:57.871 "method": "bdev_wait_for_examine" 00:07:57.871 } 00:07:57.871 ] 00:07:57.871 } 00:07:57.871 ] 00:07:57.871 } 00:07:57.871 [2024-11-22 14:46:12.413826] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:07:57.871 [2024-11-22 14:46:12.414176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:07:58.129 [2024-11-22 14:46:12.563352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.129 [2024-11-22 14:46:12.624313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.129 [2024-11-22 14:46:12.698917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.506  [2024-11-22T14:46:14.171Z] Copying: 51/64 [MB] (51 MBps) [2024-11-22T14:46:14.738Z] Copying: 64/64 [MB] (average 51 MBps) 00:08:00.073 00:08:00.073 00:08:00.073 real 0m2.101s 00:08:00.073 user 0m1.815s 00:08:00.073 sys 0m1.693s 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 ************************************ 00:08:00.073 END TEST dd_copy_to_out_bdev 00:08:00.073 ************************************ 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 ************************************ 00:08:00.073 START TEST dd_offset_magic 00:08:00.073 ************************************ 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:00.073 14:46:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:00.073 [2024-11-22 14:46:14.542732] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:00.073 [2024-11-22 14:46:14.542856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61189 ] 00:08:00.073 { 00:08:00.073 "subsystems": [ 00:08:00.073 { 00:08:00.073 "subsystem": "bdev", 00:08:00.073 "config": [ 00:08:00.073 { 00:08:00.073 "params": { 00:08:00.073 "trtype": "pcie", 00:08:00.073 "traddr": "0000:00:10.0", 00:08:00.073 "name": "Nvme0" 00:08:00.073 }, 00:08:00.073 "method": "bdev_nvme_attach_controller" 00:08:00.073 }, 00:08:00.073 { 00:08:00.073 "params": { 00:08:00.073 "trtype": "pcie", 00:08:00.073 "traddr": "0000:00:11.0", 00:08:00.073 "name": "Nvme1" 00:08:00.073 }, 00:08:00.073 "method": "bdev_nvme_attach_controller" 00:08:00.073 }, 00:08:00.073 { 00:08:00.073 "method": "bdev_wait_for_examine" 00:08:00.073 } 00:08:00.073 ] 00:08:00.073 } 00:08:00.073 ] 00:08:00.073 } 00:08:00.073 [2024-11-22 14:46:14.683687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.332 [2024-11-22 14:46:14.756670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.332 [2024-11-22 14:46:14.833197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.591  [2024-11-22T14:46:15.514Z] Copying: 65/65 [MB] (average 928 MBps) 00:08:00.849 00:08:00.849 14:46:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:00.849 14:46:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:00.849 14:46:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:00.849 14:46:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:00.849 [2024-11-22 14:46:15.471988] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:00.849 [2024-11-22 14:46:15.472090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61209 ] 00:08:00.849 { 00:08:00.849 "subsystems": [ 00:08:00.849 { 00:08:00.849 "subsystem": "bdev", 00:08:00.849 "config": [ 00:08:00.849 { 00:08:00.849 "params": { 00:08:00.849 "trtype": "pcie", 00:08:00.849 "traddr": "0000:00:10.0", 00:08:00.849 "name": "Nvme0" 00:08:00.849 }, 00:08:00.849 "method": "bdev_nvme_attach_controller" 00:08:00.849 }, 00:08:00.849 { 00:08:00.849 "params": { 00:08:00.849 "trtype": "pcie", 00:08:00.849 "traddr": "0000:00:11.0", 00:08:00.849 "name": "Nvme1" 00:08:00.849 }, 00:08:00.849 "method": "bdev_nvme_attach_controller" 00:08:00.849 }, 00:08:00.849 { 00:08:00.849 "method": "bdev_wait_for_examine" 00:08:00.849 } 00:08:00.849 ] 00:08:00.849 } 00:08:00.849 ] 00:08:00.849 } 00:08:01.107 [2024-11-22 14:46:15.613103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.107 [2024-11-22 14:46:15.667424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.107 [2024-11-22 14:46:15.741023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.415  [2024-11-22T14:46:16.350Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:01.685 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:01.685 14:46:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:01.685 [2024-11-22 14:46:16.258020] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:01.685 [2024-11-22 14:46:16.258126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61231 ] 00:08:01.685 { 00:08:01.685 "subsystems": [ 00:08:01.685 { 00:08:01.685 "subsystem": "bdev", 00:08:01.685 "config": [ 00:08:01.685 { 00:08:01.685 "params": { 00:08:01.685 "trtype": "pcie", 00:08:01.685 "traddr": "0000:00:10.0", 00:08:01.685 "name": "Nvme0" 00:08:01.685 }, 00:08:01.685 "method": "bdev_nvme_attach_controller" 00:08:01.685 }, 00:08:01.685 { 00:08:01.685 "params": { 00:08:01.685 "trtype": "pcie", 00:08:01.685 "traddr": "0000:00:11.0", 00:08:01.685 "name": "Nvme1" 00:08:01.685 }, 00:08:01.686 "method": "bdev_nvme_attach_controller" 00:08:01.686 }, 00:08:01.686 { 00:08:01.686 "method": "bdev_wait_for_examine" 00:08:01.686 } 00:08:01.686 ] 00:08:01.686 } 00:08:01.686 ] 00:08:01.686 } 00:08:01.944 [2024-11-22 14:46:16.401008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.944 [2024-11-22 14:46:16.454817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.944 [2024-11-22 14:46:16.530581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.203  [2024-11-22T14:46:17.127Z] Copying: 65/65 [MB] (average 1015 MBps) 00:08:02.462 00:08:02.462 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:02.462 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:02.462 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:02.462 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:02.720 [2024-11-22 14:46:17.176686] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:02.720 [2024-11-22 14:46:17.176802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:08:02.720 { 00:08:02.720 "subsystems": [ 00:08:02.720 { 00:08:02.720 "subsystem": "bdev", 00:08:02.720 "config": [ 00:08:02.720 { 00:08:02.720 "params": { 00:08:02.720 "trtype": "pcie", 00:08:02.720 "traddr": "0000:00:10.0", 00:08:02.720 "name": "Nvme0" 00:08:02.720 }, 00:08:02.720 "method": "bdev_nvme_attach_controller" 00:08:02.720 }, 00:08:02.720 { 00:08:02.720 "params": { 00:08:02.720 "trtype": "pcie", 00:08:02.720 "traddr": "0000:00:11.0", 00:08:02.720 "name": "Nvme1" 00:08:02.720 }, 00:08:02.720 "method": "bdev_nvme_attach_controller" 00:08:02.720 }, 00:08:02.720 { 00:08:02.721 "method": "bdev_wait_for_examine" 00:08:02.721 } 00:08:02.721 ] 00:08:02.721 } 00:08:02.721 ] 00:08:02.721 } 00:08:02.721 [2024-11-22 14:46:17.324060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.721 [2024-11-22 14:46:17.378640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.979 [2024-11-22 14:46:17.457929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.236  [2024-11-22T14:46:18.160Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:03.495 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:03.495 00:08:03.495 real 0m3.439s 00:08:03.495 user 0m2.429s 00:08:03.495 sys 0m1.180s 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:03.495 ************************************ 00:08:03.495 END TEST dd_offset_magic 00:08:03.495 ************************************ 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:03.495 14:46:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.495 [2024-11-22 14:46:18.030534] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:03.495 [2024-11-22 14:46:18.030809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:08:03.495 { 00:08:03.495 "subsystems": [ 00:08:03.495 { 00:08:03.495 "subsystem": "bdev", 00:08:03.495 "config": [ 00:08:03.495 { 00:08:03.495 "params": { 00:08:03.495 "trtype": "pcie", 00:08:03.495 "traddr": "0000:00:10.0", 00:08:03.495 "name": "Nvme0" 00:08:03.495 }, 00:08:03.495 "method": "bdev_nvme_attach_controller" 00:08:03.495 }, 00:08:03.495 { 00:08:03.495 "params": { 00:08:03.495 "trtype": "pcie", 00:08:03.495 "traddr": "0000:00:11.0", 00:08:03.495 "name": "Nvme1" 00:08:03.495 }, 00:08:03.495 "method": "bdev_nvme_attach_controller" 00:08:03.495 }, 00:08:03.495 { 00:08:03.495 "method": "bdev_wait_for_examine" 00:08:03.495 } 00:08:03.495 ] 00:08:03.495 } 00:08:03.495 ] 00:08:03.495 } 00:08:03.754 [2024-11-22 14:46:18.177136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.754 [2024-11-22 14:46:18.240239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.754 [2024-11-22 14:46:18.317462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.012  [2024-11-22T14:46:18.937Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:04.272 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:04.272 14:46:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.272 { 00:08:04.272 "subsystems": [ 00:08:04.272 { 00:08:04.272 "subsystem": "bdev", 00:08:04.272 "config": [ 00:08:04.272 { 00:08:04.272 "params": { 00:08:04.272 "trtype": "pcie", 00:08:04.272 "traddr": "0000:00:10.0", 00:08:04.272 "name": "Nvme0" 00:08:04.272 }, 00:08:04.272 "method": "bdev_nvme_attach_controller" 00:08:04.272 }, 00:08:04.272 { 00:08:04.272 "params": { 00:08:04.272 "trtype": "pcie", 00:08:04.272 "traddr": "0000:00:11.0", 00:08:04.272 "name": "Nvme1" 00:08:04.272 }, 00:08:04.272 "method": "bdev_nvme_attach_controller" 00:08:04.272 }, 00:08:04.272 { 00:08:04.272 "method": "bdev_wait_for_examine" 00:08:04.272 } 00:08:04.272 ] 00:08:04.272 } 00:08:04.272 ] 00:08:04.272 } 00:08:04.272 [2024-11-22 14:46:18.846481] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:04.272 [2024-11-22 14:46:18.846918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:08:04.531 [2024-11-22 14:46:18.995010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.531 [2024-11-22 14:46:19.073246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.531 [2024-11-22 14:46:19.149474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.789  [2024-11-22T14:46:19.713Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:08:05.048 00:08:05.048 14:46:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:05.048 ************************************ 00:08:05.048 END TEST spdk_dd_bdev_to_bdev 00:08:05.048 ************************************ 00:08:05.048 00:08:05.048 real 0m8.288s 00:08:05.048 user 0m5.971s 00:08:05.048 sys 0m4.155s 00:08:05.048 14:46:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.048 14:46:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:05.048 14:46:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:05.048 14:46:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:05.048 14:46:19 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.048 14:46:19 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.048 14:46:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.308 ************************************ 00:08:05.308 START TEST spdk_dd_uring 00:08:05.308 ************************************ 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:05.308 * Looking for test storage... 
00:08:05.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.308 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.309 --rc genhtml_branch_coverage=1 00:08:05.309 --rc genhtml_function_coverage=1 00:08:05.309 --rc genhtml_legend=1 00:08:05.309 --rc geninfo_all_blocks=1 00:08:05.309 --rc geninfo_unexecuted_blocks=1 00:08:05.309 00:08:05.309 ' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.309 --rc genhtml_branch_coverage=1 00:08:05.309 --rc genhtml_function_coverage=1 00:08:05.309 --rc genhtml_legend=1 00:08:05.309 --rc geninfo_all_blocks=1 00:08:05.309 --rc geninfo_unexecuted_blocks=1 00:08:05.309 00:08:05.309 ' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.309 --rc genhtml_branch_coverage=1 00:08:05.309 --rc genhtml_function_coverage=1 00:08:05.309 --rc genhtml_legend=1 00:08:05.309 --rc geninfo_all_blocks=1 00:08:05.309 --rc geninfo_unexecuted_blocks=1 00:08:05.309 00:08:05.309 ' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.309 --rc genhtml_branch_coverage=1 00:08:05.309 --rc genhtml_function_coverage=1 00:08:05.309 --rc genhtml_legend=1 00:08:05.309 --rc geninfo_all_blocks=1 00:08:05.309 --rc geninfo_unexecuted_blocks=1 00:08:05.309 00:08:05.309 ' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:05.309 ************************************ 00:08:05.309 START TEST dd_uring_copy 00:08:05.309 ************************************ 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:05.309 
14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:05.309 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.569 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=8q1v1oyxgyh1wabyaeq4lul97ou5w08egrni155bb6gy4ht7kcgszcoqprwuoovhh17mfcjkd8mml3jnrul8adszons8vdkuj3sk7301tvzfzrvqvpp6rt60o7xajngivrimtoy5oo3ntfpyhsbo7cvf82cuniiqfbhd2ppztszxad59nstabjhspl1j7qcp002gdp0xfkklpsrdht8at8rdplsi06xe5le506zfxobhbe3shkj3ncvmm42q1r53hbrn4whjm04y5lsx9ls9lbqyo26dyvxi43ekndhda7zpreej7ln9ulptu8z92ibcchpqstn5ybg470yzhclco7bxj15t45w9ouxwugn1bli2vwn16kcqygvewaapi1whbq9hlm07x2juzooo16ota7unccf716wfr8skf55k6q8bsuej3v1i3h60l6327326vzfjnmrrxwb5tset5l5b1qia5ht114alkg2o2xlvtk8mjd7dcfgjugaun1heze0t8q72orl0bxmvg1867wydjsg0yipasvxojjl3b6hfwx2gqm2rn27r1gtzwcpd1c6azu5sa3xit6rvtynjpbcl322z6xnyp7rm9d2pcb8xpmuyd7vfl9twpcxroa39zxidgrnnqpz7q2oke810qfz665nmu8bfe9j5qltsr5xp6cdp2jc3a9jc5kktghv18q5eglvipz56gxhtvxymhl55ym223pdh65i6y98rppj7qe2riynce3owqjvdzfvn28nlcfr9rub5g5cla7uocpjg5lqo7j1r5zog39l0r7yr7cajji5cbv3w4nezjwzsn2txz0k3c4s4e7g2h6nl3gqgjlp0mjh3g5zsfwe4j90cv0xfwe164c1w3n4svgko3x9ws3p1gi41xqafoll6mn0gq96d88ytmzdq3a3s9zgfgi8mit9yai64kh8rehseyqhvg95xmehhtesjbnzats9rm1lv69y84at84da2gwbayfyct5jovs1u9jwuvnc4x9h4 00:08:05.569 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
8q1v1oyxgyh1wabyaeq4lul97ou5w08egrni155bb6gy4ht7kcgszcoqprwuoovhh17mfcjkd8mml3jnrul8adszons8vdkuj3sk7301tvzfzrvqvpp6rt60o7xajngivrimtoy5oo3ntfpyhsbo7cvf82cuniiqfbhd2ppztszxad59nstabjhspl1j7qcp002gdp0xfkklpsrdht8at8rdplsi06xe5le506zfxobhbe3shkj3ncvmm42q1r53hbrn4whjm04y5lsx9ls9lbqyo26dyvxi43ekndhda7zpreej7ln9ulptu8z92ibcchpqstn5ybg470yzhclco7bxj15t45w9ouxwugn1bli2vwn16kcqygvewaapi1whbq9hlm07x2juzooo16ota7unccf716wfr8skf55k6q8bsuej3v1i3h60l6327326vzfjnmrrxwb5tset5l5b1qia5ht114alkg2o2xlvtk8mjd7dcfgjugaun1heze0t8q72orl0bxmvg1867wydjsg0yipasvxojjl3b6hfwx2gqm2rn27r1gtzwcpd1c6azu5sa3xit6rvtynjpbcl322z6xnyp7rm9d2pcb8xpmuyd7vfl9twpcxroa39zxidgrnnqpz7q2oke810qfz665nmu8bfe9j5qltsr5xp6cdp2jc3a9jc5kktghv18q5eglvipz56gxhtvxymhl55ym223pdh65i6y98rppj7qe2riynce3owqjvdzfvn28nlcfr9rub5g5cla7uocpjg5lqo7j1r5zog39l0r7yr7cajji5cbv3w4nezjwzsn2txz0k3c4s4e7g2h6nl3gqgjlp0mjh3g5zsfwe4j90cv0xfwe164c1w3n4svgko3x9ws3p1gi41xqafoll6mn0gq96d88ytmzdq3a3s9zgfgi8mit9yai64kh8rehseyqhvg95xmehhtesjbnzats9rm1lv69y84at84da2gwbayfyct5jovs1u9jwuvnc4x9h4 00:08:05.569 14:46:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:05.569 [2024-11-22 14:46:20.035145] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:05.569 [2024-11-22 14:46:20.035254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:08:05.569 [2024-11-22 14:46:20.179631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.828 [2024-11-22 14:46:20.255706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.828 [2024-11-22 14:46:20.340920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.764  [2024-11-22T14:46:21.996Z] Copying: 511/511 [MB] (average 1150 MBps) 00:08:07.331 00:08:07.331 14:46:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:07.331 14:46:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:07.331 14:46:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:07.331 14:46:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:07.331 [2024-11-22 14:46:21.820920] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:07.331 [2024-11-22 14:46:21.821075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:08:07.331 { 00:08:07.331 "subsystems": [ 00:08:07.331 { 00:08:07.331 "subsystem": "bdev", 00:08:07.331 "config": [ 00:08:07.331 { 00:08:07.331 "params": { 00:08:07.331 "block_size": 512, 00:08:07.331 "num_blocks": 1048576, 00:08:07.331 "name": "malloc0" 00:08:07.331 }, 00:08:07.331 "method": "bdev_malloc_create" 00:08:07.331 }, 00:08:07.331 { 00:08:07.331 "params": { 00:08:07.331 "filename": "/dev/zram1", 00:08:07.331 "name": "uring0" 00:08:07.331 }, 00:08:07.331 "method": "bdev_uring_create" 00:08:07.331 }, 00:08:07.331 { 00:08:07.331 "method": "bdev_wait_for_examine" 00:08:07.331 } 00:08:07.331 ] 00:08:07.331 } 00:08:07.331 ] 00:08:07.331 } 00:08:07.331 [2024-11-22 14:46:21.972237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.590 [2024-11-22 14:46:22.039681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.590 [2024-11-22 14:46:22.120360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.027  [2024-11-22T14:46:24.623Z] Copying: 196/512 [MB] (196 MBps) [2024-11-22T14:46:25.189Z] Copying: 396/512 [MB] (200 MBps) [2024-11-22T14:46:25.756Z] Copying: 512/512 [MB] (average 198 MBps) 00:08:11.091 00:08:11.091 14:46:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:11.091 14:46:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:11.091 14:46:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:11.091 14:46:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.091 { 00:08:11.091 "subsystems": [ 00:08:11.091 { 00:08:11.091 "subsystem": "bdev", 00:08:11.091 "config": [ 00:08:11.091 { 00:08:11.091 "params": { 00:08:11.091 "block_size": 512, 00:08:11.091 "num_blocks": 1048576, 00:08:11.091 "name": "malloc0" 00:08:11.091 }, 00:08:11.091 "method": "bdev_malloc_create" 00:08:11.091 }, 00:08:11.091 { 00:08:11.091 "params": { 00:08:11.091 "filename": "/dev/zram1", 00:08:11.091 "name": "uring0" 00:08:11.091 }, 00:08:11.091 "method": "bdev_uring_create" 00:08:11.091 }, 00:08:11.091 { 00:08:11.091 "method": "bdev_wait_for_examine" 00:08:11.091 } 00:08:11.091 ] 00:08:11.091 } 00:08:11.091 ] 00:08:11.091 } 00:08:11.091 [2024-11-22 14:46:25.651928] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:11.091 [2024-11-22 14:46:25.652034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61458 ] 00:08:11.350 [2024-11-22 14:46:25.799049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.350 [2024-11-22 14:46:25.889356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.350 [2024-11-22 14:46:25.976398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.725  [2024-11-22T14:46:28.325Z] Copying: 178/512 [MB] (178 MBps) [2024-11-22T14:46:29.701Z] Copying: 340/512 [MB] (162 MBps) [2024-11-22T14:46:29.701Z] Copying: 489/512 [MB] (148 MBps) [2024-11-22T14:46:30.268Z] Copying: 512/512 [MB] (average 163 MBps) 00:08:15.603 00:08:15.603 14:46:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:15.603 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 8q1v1oyxgyh1wabyaeq4lul97ou5w08egrni155bb6gy4ht7kcgszcoqprwuoovhh17mfcjkd8mml3jnrul8adszons8vdkuj3sk7301tvzfzrvqvpp6rt60o7xajngivrimtoy5oo3ntfpyhsbo7cvf82cuniiqfbhd2ppztszxad59nstabjhspl1j7qcp002gdp0xfkklpsrdht8at8rdplsi06xe5le506zfxobhbe3shkj3ncvmm42q1r53hbrn4whjm04y5lsx9ls9lbqyo26dyvxi43ekndhda7zpreej7ln9ulptu8z92ibcchpqstn5ybg470yzhclco7bxj15t45w9ouxwugn1bli2vwn16kcqygvewaapi1whbq9hlm07x2juzooo16ota7unccf716wfr8skf55k6q8bsuej3v1i3h60l6327326vzfjnmrrxwb5tset5l5b1qia5ht114alkg2o2xlvtk8mjd7dcfgjugaun1heze0t8q72orl0bxmvg1867wydjsg0yipasvxojjl3b6hfwx2gqm2rn27r1gtzwcpd1c6azu5sa3xit6rvtynjpbcl322z6xnyp7rm9d2pcb8xpmuyd7vfl9twpcxroa39zxidgrnnqpz7q2oke810qfz665nmu8bfe9j5qltsr5xp6cdp2jc3a9jc5kktghv18q5eglvipz56gxhtvxymhl55ym223pdh65i6y98rppj7qe2riynce3owqjvdzfvn28nlcfr9rub5g5cla7uocpjg5lqo7j1r5zog39l0r7yr7cajji5cbv3w4nezjwzsn2txz0k3c4s4e7g2h6nl3gqgjlp0mjh3g5zsfwe4j90cv0xfwe164c1w3n4svgko3x9ws3p1gi41xqafoll6mn0gq96d88ytmzdq3a3s9zgfgi8mit9yai64kh8rehseyqhvg95xmehhtesjbnzats9rm1lv69y84at84da2gwbayfyct5jovs1u9jwuvnc4x9h4 == 
\8\q\1\v\1\o\y\x\g\y\h\1\w\a\b\y\a\e\q\4\l\u\l\9\7\o\u\5\w\0\8\e\g\r\n\i\1\5\5\b\b\6\g\y\4\h\t\7\k\c\g\s\z\c\o\q\p\r\w\u\o\o\v\h\h\1\7\m\f\c\j\k\d\8\m\m\l\3\j\n\r\u\l\8\a\d\s\z\o\n\s\8\v\d\k\u\j\3\s\k\7\3\0\1\t\v\z\f\z\r\v\q\v\p\p\6\r\t\6\0\o\7\x\a\j\n\g\i\v\r\i\m\t\o\y\5\o\o\3\n\t\f\p\y\h\s\b\o\7\c\v\f\8\2\c\u\n\i\i\q\f\b\h\d\2\p\p\z\t\s\z\x\a\d\5\9\n\s\t\a\b\j\h\s\p\l\1\j\7\q\c\p\0\0\2\g\d\p\0\x\f\k\k\l\p\s\r\d\h\t\8\a\t\8\r\d\p\l\s\i\0\6\x\e\5\l\e\5\0\6\z\f\x\o\b\h\b\e\3\s\h\k\j\3\n\c\v\m\m\4\2\q\1\r\5\3\h\b\r\n\4\w\h\j\m\0\4\y\5\l\s\x\9\l\s\9\l\b\q\y\o\2\6\d\y\v\x\i\4\3\e\k\n\d\h\d\a\7\z\p\r\e\e\j\7\l\n\9\u\l\p\t\u\8\z\9\2\i\b\c\c\h\p\q\s\t\n\5\y\b\g\4\7\0\y\z\h\c\l\c\o\7\b\x\j\1\5\t\4\5\w\9\o\u\x\w\u\g\n\1\b\l\i\2\v\w\n\1\6\k\c\q\y\g\v\e\w\a\a\p\i\1\w\h\b\q\9\h\l\m\0\7\x\2\j\u\z\o\o\o\1\6\o\t\a\7\u\n\c\c\f\7\1\6\w\f\r\8\s\k\f\5\5\k\6\q\8\b\s\u\e\j\3\v\1\i\3\h\6\0\l\6\3\2\7\3\2\6\v\z\f\j\n\m\r\r\x\w\b\5\t\s\e\t\5\l\5\b\1\q\i\a\5\h\t\1\1\4\a\l\k\g\2\o\2\x\l\v\t\k\8\m\j\d\7\d\c\f\g\j\u\g\a\u\n\1\h\e\z\e\0\t\8\q\7\2\o\r\l\0\b\x\m\v\g\1\8\6\7\w\y\d\j\s\g\0\y\i\p\a\s\v\x\o\j\j\l\3\b\6\h\f\w\x\2\g\q\m\2\r\n\2\7\r\1\g\t\z\w\c\p\d\1\c\6\a\z\u\5\s\a\3\x\i\t\6\r\v\t\y\n\j\p\b\c\l\3\2\2\z\6\x\n\y\p\7\r\m\9\d\2\p\c\b\8\x\p\m\u\y\d\7\v\f\l\9\t\w\p\c\x\r\o\a\3\9\z\x\i\d\g\r\n\n\q\p\z\7\q\2\o\k\e\8\1\0\q\f\z\6\6\5\n\m\u\8\b\f\e\9\j\5\q\l\t\s\r\5\x\p\6\c\d\p\2\j\c\3\a\9\j\c\5\k\k\t\g\h\v\1\8\q\5\e\g\l\v\i\p\z\5\6\g\x\h\t\v\x\y\m\h\l\5\5\y\m\2\2\3\p\d\h\6\5\i\6\y\9\8\r\p\p\j\7\q\e\2\r\i\y\n\c\e\3\o\w\q\j\v\d\z\f\v\n\2\8\n\l\c\f\r\9\r\u\b\5\g\5\c\l\a\7\u\o\c\p\j\g\5\l\q\o\7\j\1\r\5\z\o\g\3\9\l\0\r\7\y\r\7\c\a\j\j\i\5\c\b\v\3\w\4\n\e\z\j\w\z\s\n\2\t\x\z\0\k\3\c\4\s\4\e\7\g\2\h\6\n\l\3\g\q\g\j\l\p\0\m\j\h\3\g\5\z\s\f\w\e\4\j\9\0\c\v\0\x\f\w\e\1\6\4\c\1\w\3\n\4\s\v\g\k\o\3\x\9\w\s\3\p\1\g\i\4\1\x\q\a\f\o\l\l\6\m\n\0\g\q\9\6\d\8\8\y\t\m\z\d\q\3\a\3\s\9\z\g\f\g\i\8\m\i\t\9\y\a\i\6\4\k\h\8\r\e\h\s\e\y\q\h\v\g\9\5\x\m\e\h\h\t\e\s\j\b\n\z\a\t\s\9\r\m\1\l\v\6\9\y\8\4\a\t\8\4\d\a\2\g\w\b\a\y\f\y\c\t\5\j\o\v\s\1\u\9\j\w\u\v\n\c\4\x\9\h\4 ]] 00:08:15.603 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:15.604 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 8q1v1oyxgyh1wabyaeq4lul97ou5w08egrni155bb6gy4ht7kcgszcoqprwuoovhh17mfcjkd8mml3jnrul8adszons8vdkuj3sk7301tvzfzrvqvpp6rt60o7xajngivrimtoy5oo3ntfpyhsbo7cvf82cuniiqfbhd2ppztszxad59nstabjhspl1j7qcp002gdp0xfkklpsrdht8at8rdplsi06xe5le506zfxobhbe3shkj3ncvmm42q1r53hbrn4whjm04y5lsx9ls9lbqyo26dyvxi43ekndhda7zpreej7ln9ulptu8z92ibcchpqstn5ybg470yzhclco7bxj15t45w9ouxwugn1bli2vwn16kcqygvewaapi1whbq9hlm07x2juzooo16ota7unccf716wfr8skf55k6q8bsuej3v1i3h60l6327326vzfjnmrrxwb5tset5l5b1qia5ht114alkg2o2xlvtk8mjd7dcfgjugaun1heze0t8q72orl0bxmvg1867wydjsg0yipasvxojjl3b6hfwx2gqm2rn27r1gtzwcpd1c6azu5sa3xit6rvtynjpbcl322z6xnyp7rm9d2pcb8xpmuyd7vfl9twpcxroa39zxidgrnnqpz7q2oke810qfz665nmu8bfe9j5qltsr5xp6cdp2jc3a9jc5kktghv18q5eglvipz56gxhtvxymhl55ym223pdh65i6y98rppj7qe2riynce3owqjvdzfvn28nlcfr9rub5g5cla7uocpjg5lqo7j1r5zog39l0r7yr7cajji5cbv3w4nezjwzsn2txz0k3c4s4e7g2h6nl3gqgjlp0mjh3g5zsfwe4j90cv0xfwe164c1w3n4svgko3x9ws3p1gi41xqafoll6mn0gq96d88ytmzdq3a3s9zgfgi8mit9yai64kh8rehseyqhvg95xmehhtesjbnzats9rm1lv69y84at84da2gwbayfyct5jovs1u9jwuvnc4x9h4 == 
\8\q\1\v\1\o\y\x\g\y\h\1\w\a\b\y\a\e\q\4\l\u\l\9\7\o\u\5\w\0\8\e\g\r\n\i\1\5\5\b\b\6\g\y\4\h\t\7\k\c\g\s\z\c\o\q\p\r\w\u\o\o\v\h\h\1\7\m\f\c\j\k\d\8\m\m\l\3\j\n\r\u\l\8\a\d\s\z\o\n\s\8\v\d\k\u\j\3\s\k\7\3\0\1\t\v\z\f\z\r\v\q\v\p\p\6\r\t\6\0\o\7\x\a\j\n\g\i\v\r\i\m\t\o\y\5\o\o\3\n\t\f\p\y\h\s\b\o\7\c\v\f\8\2\c\u\n\i\i\q\f\b\h\d\2\p\p\z\t\s\z\x\a\d\5\9\n\s\t\a\b\j\h\s\p\l\1\j\7\q\c\p\0\0\2\g\d\p\0\x\f\k\k\l\p\s\r\d\h\t\8\a\t\8\r\d\p\l\s\i\0\6\x\e\5\l\e\5\0\6\z\f\x\o\b\h\b\e\3\s\h\k\j\3\n\c\v\m\m\4\2\q\1\r\5\3\h\b\r\n\4\w\h\j\m\0\4\y\5\l\s\x\9\l\s\9\l\b\q\y\o\2\6\d\y\v\x\i\4\3\e\k\n\d\h\d\a\7\z\p\r\e\e\j\7\l\n\9\u\l\p\t\u\8\z\9\2\i\b\c\c\h\p\q\s\t\n\5\y\b\g\4\7\0\y\z\h\c\l\c\o\7\b\x\j\1\5\t\4\5\w\9\o\u\x\w\u\g\n\1\b\l\i\2\v\w\n\1\6\k\c\q\y\g\v\e\w\a\a\p\i\1\w\h\b\q\9\h\l\m\0\7\x\2\j\u\z\o\o\o\1\6\o\t\a\7\u\n\c\c\f\7\1\6\w\f\r\8\s\k\f\5\5\k\6\q\8\b\s\u\e\j\3\v\1\i\3\h\6\0\l\6\3\2\7\3\2\6\v\z\f\j\n\m\r\r\x\w\b\5\t\s\e\t\5\l\5\b\1\q\i\a\5\h\t\1\1\4\a\l\k\g\2\o\2\x\l\v\t\k\8\m\j\d\7\d\c\f\g\j\u\g\a\u\n\1\h\e\z\e\0\t\8\q\7\2\o\r\l\0\b\x\m\v\g\1\8\6\7\w\y\d\j\s\g\0\y\i\p\a\s\v\x\o\j\j\l\3\b\6\h\f\w\x\2\g\q\m\2\r\n\2\7\r\1\g\t\z\w\c\p\d\1\c\6\a\z\u\5\s\a\3\x\i\t\6\r\v\t\y\n\j\p\b\c\l\3\2\2\z\6\x\n\y\p\7\r\m\9\d\2\p\c\b\8\x\p\m\u\y\d\7\v\f\l\9\t\w\p\c\x\r\o\a\3\9\z\x\i\d\g\r\n\n\q\p\z\7\q\2\o\k\e\8\1\0\q\f\z\6\6\5\n\m\u\8\b\f\e\9\j\5\q\l\t\s\r\5\x\p\6\c\d\p\2\j\c\3\a\9\j\c\5\k\k\t\g\h\v\1\8\q\5\e\g\l\v\i\p\z\5\6\g\x\h\t\v\x\y\m\h\l\5\5\y\m\2\2\3\p\d\h\6\5\i\6\y\9\8\r\p\p\j\7\q\e\2\r\i\y\n\c\e\3\o\w\q\j\v\d\z\f\v\n\2\8\n\l\c\f\r\9\r\u\b\5\g\5\c\l\a\7\u\o\c\p\j\g\5\l\q\o\7\j\1\r\5\z\o\g\3\9\l\0\r\7\y\r\7\c\a\j\j\i\5\c\b\v\3\w\4\n\e\z\j\w\z\s\n\2\t\x\z\0\k\3\c\4\s\4\e\7\g\2\h\6\n\l\3\g\q\g\j\l\p\0\m\j\h\3\g\5\z\s\f\w\e\4\j\9\0\c\v\0\x\f\w\e\1\6\4\c\1\w\3\n\4\s\v\g\k\o\3\x\9\w\s\3\p\1\g\i\4\1\x\q\a\f\o\l\l\6\m\n\0\g\q\9\6\d\8\8\y\t\m\z\d\q\3\a\3\s\9\z\g\f\g\i\8\m\i\t\9\y\a\i\6\4\k\h\8\r\e\h\s\e\y\q\h\v\g\9\5\x\m\e\h\h\t\e\s\j\b\n\z\a\t\s\9\r\m\1\l\v\6\9\y\8\4\a\t\8\4\d\a\2\g\w\b\a\y\f\y\c\t\5\j\o\v\s\1\u\9\j\w\u\v\n\c\4\x\9\h\4 ]] 00:08:15.604 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:15.863 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:15.863 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:15.863 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:15.863 14:46:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.121 { 00:08:16.122 "subsystems": [ 00:08:16.122 { 00:08:16.122 "subsystem": "bdev", 00:08:16.122 "config": [ 00:08:16.122 { 00:08:16.122 "params": { 00:08:16.122 "block_size": 512, 00:08:16.122 "num_blocks": 1048576, 00:08:16.122 "name": "malloc0" 00:08:16.122 }, 00:08:16.122 "method": "bdev_malloc_create" 00:08:16.122 }, 00:08:16.122 { 00:08:16.122 "params": { 00:08:16.122 "filename": "/dev/zram1", 00:08:16.122 "name": "uring0" 00:08:16.122 }, 00:08:16.122 "method": "bdev_uring_create" 00:08:16.122 }, 00:08:16.122 { 00:08:16.122 "method": "bdev_wait_for_examine" 00:08:16.122 } 00:08:16.122 ] 00:08:16.122 } 00:08:16.122 ] 00:08:16.122 } 00:08:16.122 [2024-11-22 14:46:30.554740] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:16.122 [2024-11-22 14:46:30.554877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61544 ] 00:08:16.122 [2024-11-22 14:46:30.699162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.122 [2024-11-22 14:46:30.772043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.381 [2024-11-22 14:46:30.855409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.807  [2024-11-22T14:46:33.408Z] Copying: 159/512 [MB] (159 MBps) [2024-11-22T14:46:34.344Z] Copying: 312/512 [MB] (153 MBps) [2024-11-22T14:46:34.603Z] Copying: 463/512 [MB] (151 MBps) [2024-11-22T14:46:35.171Z] Copying: 512/512 [MB] (average 155 MBps) 00:08:20.506 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:20.506 14:46:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.506 [2024-11-22 14:46:35.108053] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:20.506 [2024-11-22 14:46:35.108189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61611 ] 00:08:20.506 { 00:08:20.506 "subsystems": [ 00:08:20.506 { 00:08:20.506 "subsystem": "bdev", 00:08:20.506 "config": [ 00:08:20.506 { 00:08:20.506 "params": { 00:08:20.506 "block_size": 512, 00:08:20.506 "num_blocks": 1048576, 00:08:20.506 "name": "malloc0" 00:08:20.506 }, 00:08:20.506 "method": "bdev_malloc_create" 00:08:20.506 }, 00:08:20.506 { 00:08:20.506 "params": { 00:08:20.506 "filename": "/dev/zram1", 00:08:20.506 "name": "uring0" 00:08:20.506 }, 00:08:20.506 "method": "bdev_uring_create" 00:08:20.506 }, 00:08:20.506 { 00:08:20.506 "params": { 00:08:20.506 "name": "uring0" 00:08:20.506 }, 00:08:20.506 "method": "bdev_uring_delete" 00:08:20.506 }, 00:08:20.506 { 00:08:20.506 "method": "bdev_wait_for_examine" 00:08:20.506 } 00:08:20.506 ] 00:08:20.506 } 00:08:20.506 ] 00:08:20.506 } 00:08:20.765 [2024-11-22 14:46:35.258902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.765 [2024-11-22 14:46:35.343624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.024 [2024-11-22 14:46:35.429106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.283  [2024-11-22T14:46:36.516Z] Copying: 0/0 [B] (average 0 Bps) 00:08:21.851 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:21.851 14:46:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:21.851 14:46:36 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:21.851 [2024-11-22 14:46:36.451843] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:21.851 [2024-11-22 14:46:36.451957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61640 ] 00:08:21.851 { 00:08:21.851 "subsystems": [ 00:08:21.851 { 00:08:21.851 "subsystem": "bdev", 00:08:21.851 "config": [ 00:08:21.851 { 00:08:21.851 "params": { 00:08:21.851 "block_size": 512, 00:08:21.851 "num_blocks": 1048576, 00:08:21.851 "name": "malloc0" 00:08:21.851 }, 00:08:21.851 "method": "bdev_malloc_create" 00:08:21.851 }, 00:08:21.851 { 00:08:21.851 "params": { 00:08:21.851 "filename": "/dev/zram1", 00:08:21.851 "name": "uring0" 00:08:21.851 }, 00:08:21.851 "method": "bdev_uring_create" 00:08:21.851 }, 00:08:21.851 { 00:08:21.851 "params": { 00:08:21.851 "name": "uring0" 00:08:21.851 }, 00:08:21.851 "method": "bdev_uring_delete" 00:08:21.851 }, 00:08:21.851 { 00:08:21.851 "method": "bdev_wait_for_examine" 00:08:21.851 } 00:08:21.851 ] 00:08:21.851 } 00:08:21.851 ] 00:08:21.851 } 00:08:22.110 [2024-11-22 14:46:36.603827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.110 [2024-11-22 14:46:36.679616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.110 [2024-11-22 14:46:36.756261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.368 [2024-11-22 14:46:37.029032] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:22.368 [2024-11-22 14:46:37.029104] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:22.368 [2024-11-22 14:46:37.029117] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:22.368 [2024-11-22 14:46:37.029128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.936 [2024-11-22 14:46:37.504943] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:22.936 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:23.195 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:23.195 00:08:23.195 real 0m17.911s 00:08:23.195 user 0m11.866s 00:08:23.195 sys 0m14.964s 00:08:23.195 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.195 14:46:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.195 ************************************ 00:08:23.195 END TEST dd_uring_copy 00:08:23.195 ************************************ 00:08:23.454 00:08:23.454 real 0m18.191s 00:08:23.454 user 0m12.028s 00:08:23.454 sys 0m15.090s 00:08:23.454 14:46:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.454 14:46:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 ************************************ 00:08:23.454 END TEST spdk_dd_uring 00:08:23.454 ************************************ 00:08:23.454 14:46:37 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:23.454 14:46:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.454 14:46:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.454 14:46:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:23.454 ************************************ 00:08:23.454 START TEST spdk_dd_sparse 00:08:23.454 ************************************ 00:08:23.454 14:46:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:23.454 * Looking for test storage... 00:08:23.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.454 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.455 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.714 --rc genhtml_branch_coverage=1 00:08:23.714 --rc genhtml_function_coverage=1 00:08:23.714 --rc genhtml_legend=1 00:08:23.714 --rc geninfo_all_blocks=1 00:08:23.714 --rc geninfo_unexecuted_blocks=1 00:08:23.714 00:08:23.714 ' 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.714 --rc genhtml_branch_coverage=1 00:08:23.714 --rc genhtml_function_coverage=1 00:08:23.714 --rc genhtml_legend=1 00:08:23.714 --rc geninfo_all_blocks=1 00:08:23.714 --rc geninfo_unexecuted_blocks=1 00:08:23.714 00:08:23.714 ' 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.714 --rc genhtml_branch_coverage=1 00:08:23.714 --rc genhtml_function_coverage=1 00:08:23.714 --rc genhtml_legend=1 00:08:23.714 --rc geninfo_all_blocks=1 00:08:23.714 --rc geninfo_unexecuted_blocks=1 00:08:23.714 00:08:23.714 ' 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.714 --rc genhtml_branch_coverage=1 00:08:23.714 --rc genhtml_function_coverage=1 00:08:23.714 --rc genhtml_legend=1 00:08:23.714 --rc geninfo_all_blocks=1 00:08:23.714 --rc geninfo_unexecuted_blocks=1 00:08:23.714 00:08:23.714 ' 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.714 14:46:38 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.715 14:46:38 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:23.715 1+0 records in 00:08:23.715 1+0 records out 00:08:23.715 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00558251 s, 751 MB/s 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:23.715 1+0 records in 00:08:23.715 1+0 records out 00:08:23.715 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00720333 s, 582 MB/s 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:23.715 1+0 records in 00:08:23.715 1+0 records out 00:08:23.715 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0083346 s, 503 MB/s 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:23.715 ************************************ 00:08:23.715 START TEST dd_sparse_file_to_file 00:08:23.715 ************************************ 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:23.715 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:23.715 { 00:08:23.715 "subsystems": [ 00:08:23.715 { 00:08:23.715 "subsystem": "bdev", 00:08:23.715 "config": [ 00:08:23.715 { 00:08:23.715 "params": { 00:08:23.715 "block_size": 4096, 00:08:23.715 "filename": "dd_sparse_aio_disk", 00:08:23.715 "name": "dd_aio" 00:08:23.715 }, 00:08:23.715 "method": "bdev_aio_create" 00:08:23.715 }, 00:08:23.715 { 00:08:23.715 "params": { 00:08:23.715 "lvs_name": "dd_lvstore", 00:08:23.715 "bdev_name": "dd_aio" 00:08:23.715 }, 00:08:23.715 "method": "bdev_lvol_create_lvstore" 00:08:23.715 }, 00:08:23.715 { 00:08:23.715 "method": "bdev_wait_for_examine" 00:08:23.715 } 00:08:23.715 ] 00:08:23.715 } 00:08:23.715 ] 00:08:23.715 } 00:08:23.715 [2024-11-22 14:46:38.238215] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
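Note on the prepare step traced above: file_zero1 is deliberately sparse. Three 4 MiB extents of zeros are written at offsets 0, 16 MiB and 32 MiB (bs=4M with seek=0, 4 and 8), so the file has an apparent size of 36 MiB (37748736 bytes) while only 12 MiB (24576 512-byte blocks) is actually allocated, which matches the %s and %b values the stat checks below compare. A minimal stand-alone sketch of that preparation, assuming GNU coreutils truncate/dd and reusing the names from dd/sparse.sh:

    # 100 MB backing file that later becomes the dd_aio AIO bdev
    truncate --size 104857600 dd_sparse_aio_disk
    # three 4 MiB data extents separated by 12 MiB holes
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8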
00:08:23.715 [2024-11-22 14:46:38.238336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61745 ] 00:08:23.975 [2024-11-22 14:46:38.385150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.975 [2024-11-22 14:46:38.463990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.975 [2024-11-22 14:46:38.540598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.234  [2024-11-22T14:46:39.158Z] Copying: 12/36 [MB] (average 857 MBps) 00:08:24.493 00:08:24.493 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:24.493 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:24.493 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:24.493 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:24.494 00:08:24.494 real 0m0.788s 00:08:24.494 user 0m0.476s 00:08:24.494 sys 0m0.475s 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.494 14:46:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:24.494 ************************************ 00:08:24.494 END TEST dd_sparse_file_to_file 00:08:24.494 ************************************ 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:24.494 ************************************ 00:08:24.494 START TEST dd_sparse_file_to_bdev 00:08:24.494 ************************************ 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:24.494 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:24.494 [2024-11-22 14:46:39.074622] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:24.494 [2024-11-22 14:46:39.074732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61782 ] 00:08:24.494 { 00:08:24.494 "subsystems": [ 00:08:24.494 { 00:08:24.494 "subsystem": "bdev", 00:08:24.494 "config": [ 00:08:24.494 { 00:08:24.494 "params": { 00:08:24.494 "block_size": 4096, 00:08:24.494 "filename": "dd_sparse_aio_disk", 00:08:24.494 "name": "dd_aio" 00:08:24.494 }, 00:08:24.494 "method": "bdev_aio_create" 00:08:24.494 }, 00:08:24.494 { 00:08:24.494 "params": { 00:08:24.494 "lvs_name": "dd_lvstore", 00:08:24.494 "lvol_name": "dd_lvol", 00:08:24.494 "size_in_mib": 36, 00:08:24.494 "thin_provision": true 00:08:24.494 }, 00:08:24.494 "method": "bdev_lvol_create" 00:08:24.494 }, 00:08:24.494 { 00:08:24.494 "method": "bdev_wait_for_examine" 00:08:24.494 } 00:08:24.494 ] 00:08:24.494 } 00:08:24.494 ] 00:08:24.494 } 00:08:24.752 [2024-11-22 14:46:39.222348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.752 [2024-11-22 14:46:39.282093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.752 [2024-11-22 14:46:39.359586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.012  [2024-11-22T14:46:39.935Z] Copying: 12/36 [MB] (average 500 MBps) 00:08:25.270 00:08:25.270 00:08:25.270 real 0m0.755s 00:08:25.270 user 0m0.476s 00:08:25.270 sys 0m0.453s 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.270 ************************************ 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.270 END TEST dd_sparse_file_to_bdev 00:08:25.270 ************************************ 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:25.270 ************************************ 00:08:25.270 START TEST dd_sparse_bdev_to_file 00:08:25.270 ************************************ 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:25.270 14:46:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:25.270 [2024-11-22 14:46:39.870040] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:25.270 [2024-11-22 14:46:39.870144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:08:25.270 { 00:08:25.270 "subsystems": [ 00:08:25.270 { 00:08:25.270 "subsystem": "bdev", 00:08:25.270 "config": [ 00:08:25.270 { 00:08:25.270 "params": { 00:08:25.270 "block_size": 4096, 00:08:25.270 "filename": "dd_sparse_aio_disk", 00:08:25.270 "name": "dd_aio" 00:08:25.270 }, 00:08:25.270 "method": "bdev_aio_create" 00:08:25.271 }, 00:08:25.271 { 00:08:25.271 "method": "bdev_wait_for_examine" 00:08:25.271 } 00:08:25.271 ] 00:08:25.271 } 00:08:25.271 ] 00:08:25.271 } 00:08:25.529 [2024-11-22 14:46:40.009846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.529 [2024-11-22 14:46:40.085675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.529 [2024-11-22 14:46:40.170770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.788  [2024-11-22T14:46:40.712Z] Copying: 12/36 [MB] (average 666 MBps) 00:08:26.047 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:26.047 00:08:26.047 real 0m0.789s 00:08:26.047 user 0m0.485s 00:08:26.047 
sys 0m0.477s 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.047 ************************************ 00:08:26.047 END TEST dd_sparse_bdev_to_file 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.047 ************************************ 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:26.047 00:08:26.047 real 0m2.734s 00:08:26.047 user 0m1.611s 00:08:26.047 sys 0m1.624s 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.047 14:46:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:26.047 ************************************ 00:08:26.047 END TEST spdk_dd_sparse 00:08:26.047 ************************************ 00:08:26.306 14:46:40 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.306 14:46:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.306 14:46:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.306 14:46:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.306 ************************************ 00:08:26.306 START TEST spdk_dd_negative 00:08:26.306 ************************************ 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.306 * Looking for test storage... 
00:08:26.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.306 --rc genhtml_branch_coverage=1 00:08:26.306 --rc genhtml_function_coverage=1 00:08:26.306 --rc genhtml_legend=1 00:08:26.306 --rc geninfo_all_blocks=1 00:08:26.306 --rc geninfo_unexecuted_blocks=1 00:08:26.306 00:08:26.306 ' 00:08:26.306 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.306 --rc genhtml_branch_coverage=1 00:08:26.306 --rc genhtml_function_coverage=1 00:08:26.306 --rc genhtml_legend=1 00:08:26.307 --rc geninfo_all_blocks=1 00:08:26.307 --rc geninfo_unexecuted_blocks=1 00:08:26.307 00:08:26.307 ' 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.307 --rc genhtml_branch_coverage=1 00:08:26.307 --rc genhtml_function_coverage=1 00:08:26.307 --rc genhtml_legend=1 00:08:26.307 --rc geninfo_all_blocks=1 00:08:26.307 --rc geninfo_unexecuted_blocks=1 00:08:26.307 00:08:26.307 ' 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.307 --rc genhtml_branch_coverage=1 00:08:26.307 --rc genhtml_function_coverage=1 00:08:26.307 --rc genhtml_legend=1 00:08:26.307 --rc geninfo_all_blocks=1 00:08:26.307 --rc geninfo_unexecuted_blocks=1 00:08:26.307 00:08:26.307 ' 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.307 ************************************ 00:08:26.307 START TEST 
dd_invalid_arguments 00:08:26.307 ************************************ 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.307 14:46:40 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:26.566 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:26.566 00:08:26.566 CPU options: 00:08:26.566 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:26.566 (like [0,1,10]) 00:08:26.566 --lcores lcore to CPU mapping list. The list is in the format: 00:08:26.566 [<,lcores[@CPUs]>...] 00:08:26.566 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:26.566 Within the group, '-' is used for range separator, 00:08:26.566 ',' is used for single number separator. 00:08:26.566 '( )' can be omitted for single element group, 00:08:26.566 '@' can be omitted if cpus and lcores have the same value 00:08:26.566 --disable-cpumask-locks Disable CPU core lock files. 00:08:26.566 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:26.566 pollers in the app support interrupt mode) 00:08:26.566 -p, --main-core main (primary) core for DPDK 00:08:26.566 00:08:26.566 Configuration options: 00:08:26.566 -c, --config, --json JSON config file 00:08:26.566 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:26.566 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:26.566 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:26.566 --rpcs-allowed comma-separated list of permitted RPCS 00:08:26.566 --json-ignore-init-errors don't exit on invalid config entry 00:08:26.566 00:08:26.566 Memory options: 00:08:26.566 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:26.566 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:26.566 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:26.566 -R, --huge-unlink unlink huge files after initialization 00:08:26.566 -n, --mem-channels number of memory channels used for DPDK 00:08:26.566 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:26.566 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:26.566 --no-huge run without using hugepages 00:08:26.566 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:26.566 -i, --shm-id shared memory ID (optional) 00:08:26.566 -g, --single-file-segments force creating just one hugetlbfs file 00:08:26.566 00:08:26.566 PCI options: 00:08:26.566 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:26.566 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:26.566 -u, --no-pci disable PCI access 00:08:26.566 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:26.566 00:08:26.566 Log options: 00:08:26.566 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:26.566 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:26.566 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:26.566 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:26.566 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:26.566 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:26.566 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:26.566 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:26.566 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:26.566 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:26.566 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:26.566 --silence-noticelog disable notice level logging to stderr 00:08:26.566 00:08:26.566 Trace options: 00:08:26.566 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:26.566 setting 0 to disable trace (default 32768) 00:08:26.566 Tracepoints vary in size and can use more than one trace entry. 00:08:26.566 -e, --tpoint-group [:] 00:08:26.566 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:26.566 [2024-11-22 14:46:41.047671] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:26.566 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:26.566 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:26.566 bdev_raid, scheduler, all). 00:08:26.566 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:26.566 a tracepoint group. First tpoint inside a group can be enabled by 00:08:26.566 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:26.566 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:26.566 in /include/spdk_internal/trace_defs.h 00:08:26.566 00:08:26.566 Other options: 00:08:26.566 -h, --help show this usage 00:08:26.566 -v, --version print SPDK version 00:08:26.567 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:26.567 --env-context Opaque context for use of the env implementation 00:08:26.567 00:08:26.567 Application specific: 00:08:26.567 [--------- DD Options ---------] 00:08:26.567 --if Input file. Must specify either --if or --ib. 00:08:26.567 --ib Input bdev. Must specifier either --if or --ib 00:08:26.567 --of Output file. Must specify either --of or --ob. 00:08:26.567 --ob Output bdev. Must specify either --of or --ob. 00:08:26.567 --iflag Input file flags. 00:08:26.567 --oflag Output file flags. 00:08:26.567 --bs I/O unit size (default: 4096) 00:08:26.567 --qd Queue depth (default: 2) 00:08:26.567 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:26.567 --skip Skip this many I/O units at start of input. (default: 0) 00:08:26.567 --seek Skip this many I/O units at start of output. (default: 0) 00:08:26.567 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:26.567 --sparse Enable hole skipping in input target 00:08:26.567 Available iflag and oflag values: 00:08:26.567 append - append mode 00:08:26.567 direct - use direct I/O for data 00:08:26.567 directory - fail unless a directory 00:08:26.567 dsync - use synchronized I/O for data 00:08:26.567 noatime - do not update access time 00:08:26.567 noctty - do not assign controlling terminal from file 00:08:26.567 nofollow - do not follow symlinks 00:08:26.567 nonblock - use non-blocking I/O 00:08:26.567 sync - use synchronized I/O for data and metadata 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.567 00:08:26.567 real 0m0.129s 00:08:26.567 user 0m0.089s 00:08:26.567 sys 0m0.038s 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:26.567 ************************************ 00:08:26.567 END TEST dd_invalid_arguments 00:08:26.567 ************************************ 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.567 ************************************ 00:08:26.567 START TEST dd_double_input 00:08:26.567 ************************************ 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:26.567 [2024-11-22 14:46:41.199655] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
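Note on the negative cases in this suite: each one wraps the spdk_dd invocation in the NOT helper from autotest_common.sh, so the test passes only when the wrapped command exits with a non-zero status; the es= lines traced afterwards show how that exit status is classified. A simplified sketch of what the dd_double_input case asserts, assuming only that spdk_dd rejects --if combined with --ib as the error line above shows:

    # negative test: giving both an input file and an input bdev must fail
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
        echo "FAIL: spdk_dd accepted both --if and --ib" >&2
        exit 1
    fi
    echo "PASS: spdk_dd rejected --if together with --ib"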
00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.567 00:08:26.567 real 0m0.089s 00:08:26.567 user 0m0.052s 00:08:26.567 sys 0m0.035s 00:08:26.567 ************************************ 00:08:26.567 END TEST dd_double_input 00:08:26.567 ************************************ 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.567 14:46:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.826 ************************************ 00:08:26.826 START TEST dd_double_output 00:08:26.826 ************************************ 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:26.826 [2024-11-22 14:46:41.334581] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.826 00:08:26.826 real 0m0.088s 00:08:26.826 user 0m0.059s 00:08:26.826 sys 0m0.027s 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:26.826 ************************************ 00:08:26.826 END TEST dd_double_output 00:08:26.826 ************************************ 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.826 ************************************ 00:08:26.826 START TEST dd_no_input 00:08:26.826 ************************************ 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:26.826 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.827 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:26.827 [2024-11-22 14:46:41.470538] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.086 00:08:27.086 real 0m0.086s 00:08:27.086 user 0m0.052s 00:08:27.086 sys 0m0.033s 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:27.086 ************************************ 00:08:27.086 END TEST dd_no_input 00:08:27.086 ************************************ 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.086 ************************************ 00:08:27.086 START TEST dd_no_output 00:08:27.086 ************************************ 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.086 [2024-11-22 14:46:41.614148] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:27.086 14:46:41 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.086 00:08:27.086 real 0m0.088s 00:08:27.086 user 0m0.050s 00:08:27.086 sys 0m0.037s 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:27.086 ************************************ 00:08:27.086 END TEST dd_no_output 00:08:27.086 ************************************ 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.086 ************************************ 00:08:27.086 START TEST dd_wrong_blocksize 00:08:27.086 ************************************ 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.086 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.345 [2024-11-22 14:46:41.765668] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:27.345 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:27.345 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.345 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.345 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.345 00:08:27.345 real 0m0.094s 00:08:27.345 user 0m0.057s 00:08:27.345 sys 0m0.035s 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:27.346 ************************************ 00:08:27.346 END TEST dd_wrong_blocksize 00:08:27.346 ************************************ 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.346 ************************************ 00:08:27.346 START TEST dd_smaller_blocksize 00:08:27.346 ************************************ 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.346 
14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.346 14:46:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.346 [2024-11-22 14:46:41.909078] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:27.346 [2024-11-22 14:46:41.909221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ] 00:08:27.604 [2024-11-22 14:46:42.063736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.604 [2024-11-22 14:46:42.147251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.604 [2024-11-22 14:46:42.232110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.172 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:28.431 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:28.431 [2024-11-22 14:46:42.989810] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:28.431 [2024-11-22 14:46:42.989899] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.690 [2024-11-22 14:46:43.183312] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.690 00:08:28.690 real 0m1.432s 00:08:28.690 user 0m0.523s 00:08:28.690 sys 0m0.798s 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:28.690 ************************************ 00:08:28.690 END TEST dd_smaller_blocksize 00:08:28.690 ************************************ 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.690 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.691 ************************************ 00:08:28.691 START TEST dd_invalid_count 00:08:28.691 ************************************ 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.691 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.951 [2024-11-22 14:46:43.383255] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.951 00:08:28.951 real 0m0.075s 00:08:28.951 user 0m0.041s 00:08:28.951 sys 0m0.033s 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:28.951 ************************************ 00:08:28.951 END TEST dd_invalid_count 00:08:28.951 ************************************ 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.951 ************************************ 
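Note: every spdk_dd call above that is supposed to fail (such as --bs=0, the oversized --bs, or --count=-9) is wrapped in the NOT helper from common/autotest_common.sh, so an error exit is what makes the sub-test pass. A simplified stand-in for that pattern, shown only as a sketch (the real helper also prints diagnostics and normalizes the exit status):

NOT() {
    # Run a command that is expected to fail and invert its exit status.
    if "$@"; then
        return 1    # unexpected success -> the negative test fails
    fi
    return 0        # failure, as expected -> the negative test passes
}

# Example drawn from the dd_invalid_count trace above: a negative block
# count is rejected with "Invalid --count value".
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --count=-9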
00:08:28.951 START TEST dd_invalid_oflag 00:08:28.951 ************************************ 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.951 [2024-11-22 14:46:43.513343] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.951 00:08:28.951 real 0m0.076s 00:08:28.951 user 0m0.050s 00:08:28.951 sys 0m0.024s 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:28.951 ************************************ 00:08:28.951 END TEST dd_invalid_oflag 00:08:28.951 ************************************ 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.951 ************************************ 00:08:28.951 START TEST dd_invalid_iflag 00:08:28.951 
************************************ 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.951 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:29.211 [2024-11-22 14:46:43.646790] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.211 00:08:29.211 real 0m0.079s 00:08:29.211 user 0m0.047s 00:08:29.211 sys 0m0.031s 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:29.211 ************************************ 00:08:29.211 END TEST dd_invalid_iflag 00:08:29.211 ************************************ 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.211 ************************************ 00:08:29.211 START TEST dd_unknown_flag 00:08:29.211 ************************************ 00:08:29.211 
14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.211 14:46:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:29.211 [2024-11-22 14:46:43.776305] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
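The es=... lines that follow each expected failure are the exit-status bookkeeping done in common/autotest_common.sh: a small status such as 22 is kept as-is, while a status above 128 (244 in the dd_smaller_blocksize trace above, 234 and 228 later in this run) is first reduced by 128 and then collapsed to 1. A rough sketch of that logic; the subtraction is read off the trace, the exact case list is an assumption:

es=244                      # example raw status after an expected failure
if (( es > 128 )); then
    es=$(( es - 128 ))      # 244 -> 116, 234 -> 106, 228 -> 100
fi
case "$es" in
    100|106|116) es=1 ;;    # assumed mapping; the real list lives in autotest_common.sh
esac
(( !es == 0 )) && echo "expected failure confirmed (es=$es)"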
00:08:29.211 [2024-11-22 14:46:43.776435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62155 ] 00:08:29.470 [2024-11-22 14:46:43.926984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.470 [2024-11-22 14:46:44.000578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.470 [2024-11-22 14:46:44.085502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.729 [2024-11-22 14:46:44.140256] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:29.729 [2024-11-22 14:46:44.140403] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.729 [2024-11-22 14:46:44.140484] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:29.729 [2024-11-22 14:46:44.140500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.729 [2024-11-22 14:46:44.140843] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:29.729 [2024-11-22 14:46:44.140869] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.729 [2024-11-22 14:46:44.140938] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:29.729 [2024-11-22 14:46:44.140950] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:29.729 [2024-11-22 14:46:44.325097] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.988 00:08:29.988 real 0m0.702s 00:08:29.988 user 0m0.390s 00:08:29.988 sys 0m0.217s 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:29.988 ************************************ 00:08:29.988 END TEST dd_unknown_flag 00:08:29.988 ************************************ 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.988 ************************************ 00:08:29.988 START TEST dd_invalid_json 00:08:29.988 ************************************ 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.988 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.988 [2024-11-22 14:46:44.531828] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
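For dd_invalid_json the configuration handed to --json is deliberately empty: the bare ':' recorded at negative_dd.sh@94 is the producer side of what appears to be a process substitution (hence the /dev/fd/62 path), so spdk_dd sees no JSON at all and aborts with "JSON data cannot be empty" a little further down. An assumed reconstruction of that invocation:

# Empty stream on --json; expected to fail with "JSON data cannot be empty".
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --json <(:) \
  && echo "unexpected success" || echo "failed as expected"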
00:08:29.988 [2024-11-22 14:46:44.531952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:08:30.248 [2024-11-22 14:46:44.684711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.248 [2024-11-22 14:46:44.760332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.248 [2024-11-22 14:46:44.760483] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:30.248 [2024-11-22 14:46:44.760510] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:30.248 [2024-11-22 14:46:44.760522] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.248 [2024-11-22 14:46:44.760583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.248 00:08:30.248 real 0m0.384s 00:08:30.248 user 0m0.205s 00:08:30.248 sys 0m0.077s 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 ************************************ 00:08:30.248 END TEST dd_invalid_json 00:08:30.248 ************************************ 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 ************************************ 00:08:30.248 START TEST dd_invalid_seek 00:08:30.248 ************************************ 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:30.248 
14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.248 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.508 14:46:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:30.508 [2024-11-22 14:46:44.958840] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
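From dd_invalid_seek onward the negative tests stop using the dump files and instead drive spdk_dd against two 512-block malloc bdevs, with the bdev configuration passed in through --json /dev/fd/62; the "subsystems" blob printed just below is that configuration (gen_conf from dd/common.sh produces it in the real test). A sketch of an equivalent stand-alone invocation, with the config inlined to match the blob below:

# Two 512-block, 512-byte-block malloc bdevs, as in the JSON printed below.
conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
  {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
  {"method":"bdev_wait_for_examine"}]}]}'

# --seek=513 points one block past the end of malloc1 (512 blocks), so the run
# is expected to fail with "--seek value too big (513)".
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 \
    --json <(printf '%s' "$conf") \
  && echo "unexpected success" || echo "failed as expected"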
00:08:30.508 [2024-11-22 14:46:44.959071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:08:30.508 { 00:08:30.508 "subsystems": [ 00:08:30.508 { 00:08:30.508 "subsystem": "bdev", 00:08:30.508 "config": [ 00:08:30.508 { 00:08:30.508 "params": { 00:08:30.508 "block_size": 512, 00:08:30.508 "num_blocks": 512, 00:08:30.508 "name": "malloc0" 00:08:30.508 }, 00:08:30.508 "method": "bdev_malloc_create" 00:08:30.508 }, 00:08:30.508 { 00:08:30.508 "params": { 00:08:30.508 "block_size": 512, 00:08:30.508 "num_blocks": 512, 00:08:30.508 "name": "malloc1" 00:08:30.508 }, 00:08:30.508 "method": "bdev_malloc_create" 00:08:30.508 }, 00:08:30.508 { 00:08:30.508 "method": "bdev_wait_for_examine" 00:08:30.508 } 00:08:30.508 ] 00:08:30.508 } 00:08:30.508 ] 00:08:30.508 } 00:08:30.508 [2024-11-22 14:46:45.108077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.768 [2024-11-22 14:46:45.176283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.768 [2024-11-22 14:46:45.258993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.768 [2024-11-22 14:46:45.339843] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:30.768 [2024-11-22 14:46:45.340189] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.028 [2024-11-22 14:46:45.522093] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.028 00:08:31.028 real 0m0.717s 00:08:31.028 user 0m0.476s 00:08:31.028 sys 0m0.201s 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 ************************************ 00:08:31.028 END TEST dd_invalid_seek 00:08:31.028 ************************************ 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 ************************************ 00:08:31.028 START TEST dd_invalid_skip 00:08:31.028 ************************************ 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.028 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.288 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.288 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.288 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.288 14:46:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:31.288 { 00:08:31.288 "subsystems": [ 00:08:31.288 { 00:08:31.288 "subsystem": "bdev", 00:08:31.288 "config": [ 00:08:31.288 { 00:08:31.288 "params": { 00:08:31.288 "block_size": 512, 00:08:31.288 "num_blocks": 512, 00:08:31.288 "name": "malloc0" 00:08:31.288 }, 00:08:31.288 "method": "bdev_malloc_create" 00:08:31.288 }, 00:08:31.288 { 00:08:31.288 "params": { 00:08:31.288 "block_size": 512, 00:08:31.288 "num_blocks": 512, 00:08:31.288 "name": "malloc1" 
00:08:31.288 }, 00:08:31.288 "method": "bdev_malloc_create" 00:08:31.288 }, 00:08:31.288 { 00:08:31.288 "method": "bdev_wait_for_examine" 00:08:31.288 } 00:08:31.288 ] 00:08:31.288 } 00:08:31.288 ] 00:08:31.288 } 00:08:31.288 [2024-11-22 14:46:45.751772] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:31.288 [2024-11-22 14:46:45.751891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62252 ] 00:08:31.288 [2024-11-22 14:46:45.898176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.548 [2024-11-22 14:46:45.961402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.548 [2024-11-22 14:46:46.044175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.548 [2024-11-22 14:46:46.120621] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:31.548 [2024-11-22 14:46:46.120689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.807 [2024-11-22 14:46:46.303555] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.807 00:08:31.807 real 0m0.713s 00:08:31.807 user 0m0.451s 00:08:31.807 sys 0m0.216s 00:08:31.807 ************************************ 00:08:31.807 END TEST dd_invalid_skip 00:08:31.807 ************************************ 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.807 ************************************ 00:08:31.807 START TEST dd_invalid_input_count 00:08:31.807 ************************************ 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:31.807 14:46:46 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.807 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.808 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.808 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.808 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.808 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.808 14:46:46 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:32.067 { 00:08:32.067 "subsystems": [ 00:08:32.067 { 00:08:32.067 "subsystem": "bdev", 00:08:32.067 "config": [ 00:08:32.067 { 00:08:32.067 "params": { 00:08:32.067 "block_size": 512, 00:08:32.067 "num_blocks": 512, 00:08:32.067 "name": "malloc0" 00:08:32.067 }, 00:08:32.067 "method": "bdev_malloc_create" 00:08:32.067 }, 00:08:32.067 { 00:08:32.067 "params": { 00:08:32.067 "block_size": 512, 00:08:32.067 "num_blocks": 512, 00:08:32.067 "name": "malloc1" 00:08:32.067 }, 00:08:32.067 "method": "bdev_malloc_create" 00:08:32.067 }, 00:08:32.067 { 00:08:32.067 "method": "bdev_wait_for_examine" 00:08:32.067 } 
00:08:32.067 ] 00:08:32.067 } 00:08:32.067 ] 00:08:32.067 } 00:08:32.067 [2024-11-22 14:46:46.512472] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:32.067 [2024-11-22 14:46:46.512588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62286 ] 00:08:32.067 [2024-11-22 14:46:46.656697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.067 [2024-11-22 14:46:46.727898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.326 [2024-11-22 14:46:46.809193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.326 [2024-11-22 14:46:46.887787] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:32.326 [2024-11-22 14:46:46.888182] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.601 [2024-11-22 14:46:47.070559] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.601 ************************************ 00:08:32.601 END TEST dd_invalid_input_count 00:08:32.601 ************************************ 00:08:32.601 00:08:32.601 real 0m0.707s 00:08:32.601 user 0m0.448s 00:08:32.601 sys 0m0.214s 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 ************************************ 00:08:32.601 START TEST dd_invalid_output_count 00:08:32.601 ************************************ 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.601 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:32.868 { 00:08:32.868 "subsystems": [ 00:08:32.868 { 00:08:32.868 "subsystem": "bdev", 00:08:32.868 "config": [ 00:08:32.868 { 00:08:32.868 "params": { 00:08:32.868 "block_size": 512, 00:08:32.868 "num_blocks": 512, 00:08:32.868 "name": "malloc0" 00:08:32.868 }, 00:08:32.868 "method": "bdev_malloc_create" 00:08:32.868 }, 00:08:32.868 { 00:08:32.868 "method": "bdev_wait_for_examine" 00:08:32.868 } 00:08:32.868 ] 00:08:32.869 } 00:08:32.869 ] 00:08:32.869 } 00:08:32.869 [2024-11-22 14:46:47.284697] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:32.869 [2024-11-22 14:46:47.284834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:08:32.869 [2024-11-22 14:46:47.433054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.869 [2024-11-22 14:46:47.493500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.127 [2024-11-22 14:46:47.568737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.127 [2024-11-22 14:46:47.639264] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:33.127 [2024-11-22 14:46:47.639365] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.386 [2024-11-22 14:46:47.817032] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:33.386 ************************************ 00:08:33.386 END TEST dd_invalid_output_count 00:08:33.386 ************************************ 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.386 00:08:33.386 real 0m0.703s 00:08:33.386 user 0m0.449s 00:08:33.386 sys 0m0.206s 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.386 ************************************ 00:08:33.386 START TEST dd_bs_not_multiple 00:08:33.386 ************************************ 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:33.386 14:46:47 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.386 14:46:47 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:33.386 { 00:08:33.386 "subsystems": [ 00:08:33.386 { 00:08:33.386 "subsystem": "bdev", 00:08:33.386 "config": [ 00:08:33.386 { 00:08:33.386 "params": { 00:08:33.386 "block_size": 512, 00:08:33.386 "num_blocks": 512, 00:08:33.386 "name": "malloc0" 00:08:33.386 }, 00:08:33.386 "method": "bdev_malloc_create" 00:08:33.386 }, 00:08:33.386 { 00:08:33.386 "params": { 00:08:33.386 "block_size": 512, 00:08:33.386 "num_blocks": 512, 00:08:33.386 "name": "malloc1" 00:08:33.386 }, 00:08:33.386 "method": "bdev_malloc_create" 00:08:33.386 }, 00:08:33.386 { 00:08:33.386 "method": "bdev_wait_for_examine" 00:08:33.386 } 00:08:33.386 ] 00:08:33.386 } 00:08:33.386 ] 00:08:33.386 } 00:08:33.645 [2024-11-22 14:46:48.049274] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
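dd_bs_not_multiple, being set up here, closes out the negative suite: both malloc bdevs use a 512-byte native block size, so --bs=513 has to be rejected, which is the "--bs value must be a multiple of input native block size (512)" error recorded just below. The constraint itself is plain modular arithmetic:

# Why --bs=513 fails against 512-byte-block bdevs.
bs=513 native=512
if (( bs % native != 0 )); then
    echo "expected failure: --bs=$bs is not a multiple of $native"
fi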
00:08:33.645 [2024-11-22 14:46:48.049649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62356 ] 00:08:33.645 [2024-11-22 14:46:48.196388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.645 [2024-11-22 14:46:48.267667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.903 [2024-11-22 14:46:48.341850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.903 [2024-11-22 14:46:48.419184] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:33.903 [2024-11-22 14:46:48.419277] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.161 [2024-11-22 14:46:48.589093] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:34.161 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.161 ************************************ 00:08:34.161 END TEST dd_bs_not_multiple 00:08:34.162 ************************************ 00:08:34.162 00:08:34.162 real 0m0.713s 00:08:34.162 user 0m0.451s 00:08:34.162 sys 0m0.218s 00:08:34.162 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.162 14:46:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:34.162 00:08:34.162 real 0m8.000s 00:08:34.162 user 0m4.295s 00:08:34.162 sys 0m3.076s 00:08:34.162 14:46:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.162 14:46:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.162 ************************************ 00:08:34.162 END TEST spdk_dd_negative 00:08:34.162 ************************************ 00:08:34.162 ************************************ 00:08:34.162 END TEST spdk_dd 00:08:34.162 ************************************ 00:08:34.162 00:08:34.162 real 1m33.168s 00:08:34.162 user 0m59.343s 00:08:34.162 sys 0m43.806s 00:08:34.162 14:46:48 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.162 14:46:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:34.162 14:46:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:34.162 14:46:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:34.162 14:46:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:34.162 14:46:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.162 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:08:34.421 14:46:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:34.421 14:46:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:34.421 14:46:48 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:34.421 14:46:48 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:08:34.421 14:46:48 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:34.421 14:46:48 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:34.421 14:46:48 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:34.421 14:46:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.421 14:46:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.421 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:08:34.421 ************************************ 00:08:34.421 START TEST nvmf_tcp 00:08:34.421 ************************************ 00:08:34.421 14:46:48 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:34.421 * Looking for test storage... 00:08:34.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:34.421 14:46:48 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.421 14:46:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.421 14:46:48 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.421 14:46:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.421 --rc genhtml_branch_coverage=1 00:08:34.421 --rc genhtml_function_coverage=1 00:08:34.421 --rc genhtml_legend=1 00:08:34.421 --rc geninfo_all_blocks=1 00:08:34.421 --rc geninfo_unexecuted_blocks=1 00:08:34.421 00:08:34.421 ' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.421 --rc genhtml_branch_coverage=1 00:08:34.421 --rc genhtml_function_coverage=1 00:08:34.421 --rc genhtml_legend=1 00:08:34.421 --rc geninfo_all_blocks=1 00:08:34.421 --rc geninfo_unexecuted_blocks=1 00:08:34.421 00:08:34.421 ' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.421 --rc genhtml_branch_coverage=1 00:08:34.421 --rc genhtml_function_coverage=1 00:08:34.421 --rc genhtml_legend=1 00:08:34.421 --rc geninfo_all_blocks=1 00:08:34.421 --rc geninfo_unexecuted_blocks=1 00:08:34.421 00:08:34.421 ' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.421 --rc genhtml_branch_coverage=1 00:08:34.421 --rc genhtml_function_coverage=1 00:08:34.421 --rc genhtml_legend=1 00:08:34.421 --rc geninfo_all_blocks=1 00:08:34.421 --rc geninfo_unexecuted_blocks=1 00:08:34.421 00:08:34.421 ' 00:08:34.421 14:46:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:34.421 14:46:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:34.421 14:46:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.421 14:46:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.421 ************************************ 00:08:34.421 START TEST nvmf_target_core 00:08:34.421 ************************************ 00:08:34.421 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:34.681 * Looking for test storage... 00:08:34.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:34.681 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:34.682 14:46:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:34.682 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.682 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.682 14:46:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.682 ************************************ 00:08:34.682 START TEST nvmf_host_management 00:08:34.682 ************************************ 00:08:34.682 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:34.940 * Looking for test storage... 
00:08:34.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.940 --rc genhtml_branch_coverage=1 00:08:34.940 --rc genhtml_function_coverage=1 00:08:34.940 --rc genhtml_legend=1 00:08:34.940 --rc geninfo_all_blocks=1 00:08:34.940 --rc geninfo_unexecuted_blocks=1 00:08:34.940 00:08:34.940 ' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.940 --rc genhtml_branch_coverage=1 00:08:34.940 --rc genhtml_function_coverage=1 00:08:34.940 --rc genhtml_legend=1 00:08:34.940 --rc geninfo_all_blocks=1 00:08:34.940 --rc geninfo_unexecuted_blocks=1 00:08:34.940 00:08:34.940 ' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.940 --rc genhtml_branch_coverage=1 00:08:34.940 --rc genhtml_function_coverage=1 00:08:34.940 --rc genhtml_legend=1 00:08:34.940 --rc geninfo_all_blocks=1 00:08:34.940 --rc geninfo_unexecuted_blocks=1 00:08:34.940 00:08:34.940 ' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.940 --rc genhtml_branch_coverage=1 00:08:34.940 --rc genhtml_function_coverage=1 00:08:34.940 --rc genhtml_legend=1 00:08:34.940 --rc geninfo_all_blocks=1 00:08:34.940 --rc geninfo_unexecuted_blocks=1 00:08:34.940 00:08:34.940 ' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.940 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.941 14:46:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.941 Cannot find device "nvmf_init_br" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.941 Cannot find device "nvmf_init_br2" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.941 Cannot find device "nvmf_tgt_br" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.941 Cannot find device "nvmf_tgt_br2" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.941 Cannot find device "nvmf_init_br" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.941 Cannot find device "nvmf_init_br2" 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:34.941 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:35.199 Cannot find device "nvmf_tgt_br" 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:35.199 Cannot find device "nvmf_tgt_br2" 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:35.199 Cannot find device "nvmf_br" 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:35.199 Cannot find device "nvmf_init_if" 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:35.199 Cannot find device "nvmf_init_if2" 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.199 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:35.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:08:35.459 00:08:35.459 --- 10.0.0.3 ping statistics --- 00:08:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.459 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:35.459 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:35.459 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:08:35.459 00:08:35.459 --- 10.0.0.4 ping statistics --- 00:08:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.459 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:35.459 00:08:35.459 --- 10.0.0.1 ping statistics --- 00:08:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.459 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:35.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:35.459 00:08:35.459 --- 10.0.0.2 ping statistics --- 00:08:35.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.459 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.459 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62708 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62708 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:35.459 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62708 ']' 00:08:35.460 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.460 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.460 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.460 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.460 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:35.460 [2024-11-22 14:46:50.089820] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:08:35.460 [2024-11-22 14:46:50.089919] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.719 [2024-11-22 14:46:50.245987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.719 [2024-11-22 14:46:50.335601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.719 [2024-11-22 14:46:50.335674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.719 [2024-11-22 14:46:50.335688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.719 [2024-11-22 14:46:50.335700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.719 [2024-11-22 14:46:50.335710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.719 [2024-11-22 14:46:50.337464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.719 [2024-11-22 14:46:50.337709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.719 [2024-11-22 14:46:50.337542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.719 [2024-11-22 14:46:50.337701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:35.977 [2024-11-22 14:46:50.421964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.545 [2024-11-22 14:46:51.193497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.545 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.804 Malloc0 00:08:36.804 [2024-11-22 14:46:51.282698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62762 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62762 /var/tmp/bdevperf.sock 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62762 ']' 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.804 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.804 { 00:08:36.804 "params": { 00:08:36.804 "name": "Nvme$subsystem", 00:08:36.804 "trtype": "$TEST_TRANSPORT", 00:08:36.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.804 "adrfam": "ipv4", 00:08:36.804 "trsvcid": "$NVMF_PORT", 00:08:36.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.804 "hdgst": ${hdgst:-false}, 00:08:36.805 "ddgst": ${ddgst:-false} 00:08:36.805 }, 00:08:36.805 "method": "bdev_nvme_attach_controller" 00:08:36.805 } 00:08:36.805 EOF 00:08:36.805 )") 00:08:36.805 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:36.805 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:36.805 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:36.805 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.805 "params": { 00:08:36.805 "name": "Nvme0", 00:08:36.805 "trtype": "tcp", 00:08:36.805 "traddr": "10.0.0.3", 00:08:36.805 "adrfam": "ipv4", 00:08:36.805 "trsvcid": "4420", 00:08:36.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:36.805 "hdgst": false, 00:08:36.805 "ddgst": false 00:08:36.805 }, 00:08:36.805 "method": "bdev_nvme_attach_controller" 00:08:36.805 }' 00:08:36.805 [2024-11-22 14:46:51.397329] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:36.805 [2024-11-22 14:46:51.397476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:08:37.063 [2024-11-22 14:46:51.549613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.063 [2024-11-22 14:46:51.635396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.063 [2024-11-22 14:46:51.723961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.322 Running I/O for 10 seconds... 
00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.890 14:46:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.890 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.150 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.150 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:38.150 [2024-11-22 14:46:52.563226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.150 [2024-11-22 14:46:52.563476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.150 [2024-11-22 14:46:52.563513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.150 [2024-11-22 14:46:52.563527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 
[2024-11-22 14:46:52.563942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.563985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 
14:46:52.564160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.151 [2024-11-22 14:46:52.564456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.151 [2024-11-22 14:46:52.564468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.564972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.564983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.152 [2024-11-22 14:46:52.565001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.565013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f72d0 is same with the state(6) to be set 00:08:38.152 [2024-11-22 14:46:52.565247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.152 [2024-11-22 14:46:52.565266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.565278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.152 [2024-11-22 14:46:52.565293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.565304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.152 [2024-11-22 14:46:52.565313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.565324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.152 [2024-11-22 14:46:52.565333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.152 [2024-11-22 14:46:52.565342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcce0 is same with the state(6) to be set 00:08:38.152 [2024-11-22 14:46:52.566522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controtask offset: 0 on job bdev=Nvme0n1 fails 00:08:38.152 00:08:38.152 Latency(us) 00:08:38.152 [2024-11-22T14:46:52.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:38.152 Job: Nvme0n1 ended in about 0.71 seconds with error 00:08:38.152 Verification LBA range: start 0x0 length 0x400 00:08:38.152 Nvme0n1 : 0.71 1447.97 90.50 90.50 0.00 40573.03 2591.65 41466.41 00:08:38.152 [2024-11-22T14:46:52.817Z] =================================================================================================================== 00:08:38.152 [2024-11-22T14:46:52.817Z] Total : 1447.97 90.50 90.50 0.00 40573.03 2591.65 41466.41 00:08:38.152 ller 00:08:38.152 [2024-11-22 14:46:52.568642] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.152 [2024-11-22 14:46:52.568672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fcce0 (9): Bad file descriptor 00:08:38.152 [2024-11-22 14:46:52.575358] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62762 00:08:39.088 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62762) - No such process 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.088 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.088 { 00:08:39.088 "params": { 00:08:39.088 "name": "Nvme$subsystem", 00:08:39.088 "trtype": "$TEST_TRANSPORT", 00:08:39.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.088 "adrfam": "ipv4", 00:08:39.088 "trsvcid": "$NVMF_PORT", 00:08:39.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.089 "hdgst": ${hdgst:-false}, 00:08:39.089 "ddgst": ${ddgst:-false} 00:08:39.089 }, 00:08:39.089 "method": "bdev_nvme_attach_controller" 00:08:39.089 } 00:08:39.089 EOF 00:08:39.089 )") 00:08:39.089 14:46:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.089 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.089 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.089 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.089 "params": { 00:08:39.089 "name": "Nvme0", 00:08:39.089 "trtype": "tcp", 00:08:39.089 "traddr": "10.0.0.3", 00:08:39.089 "adrfam": "ipv4", 00:08:39.089 "trsvcid": "4420", 00:08:39.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.089 "hdgst": false, 00:08:39.089 "ddgst": false 00:08:39.089 }, 00:08:39.089 "method": "bdev_nvme_attach_controller" 00:08:39.089 }' 00:08:39.089 [2024-11-22 14:46:53.629581] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:08:39.089 [2024-11-22 14:46:53.629680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62806 ] 00:08:39.377 [2024-11-22 14:46:53.781933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.377 [2024-11-22 14:46:53.841142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.377 [2024-11-22 14:46:53.930247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.678 Running I/O for 1 seconds... 00:08:40.614 1472.00 IOPS, 92.00 MiB/s 00:08:40.614 Latency(us) 00:08:40.614 [2024-11-22T14:46:55.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.614 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.614 Verification LBA range: start 0x0 length 0x400 00:08:40.615 Nvme0n1 : 1.03 1497.68 93.61 0.00 0.00 41950.42 5004.57 37653.41 00:08:40.615 [2024-11-22T14:46:55.280Z] =================================================================================================================== 00:08:40.615 [2024-11-22T14:46:55.280Z] Total : 1497.68 93.61 0.00 0.00 41950.42 5004.57 37653.41 00:08:40.873 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:40.873 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.874 rmmod nvme_tcp 00:08:40.874 rmmod nvme_fabrics 00:08:40.874 rmmod nvme_keyring 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62708 ']' 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62708 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62708 ']' 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62708 00:08:40.874 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62708 00:08:41.133 killing process with pid 62708 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62708' 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62708 00:08:41.133 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62708 00:08:41.393 [2024-11-22 14:46:55.861732] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:41.393 14:46:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:41.393 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:41.393 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:41.393 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:41.393 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:41.652 00:08:41.652 real 0m6.884s 00:08:41.652 user 0m25.288s 00:08:41.652 sys 0m1.843s 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.652 ************************************ 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.652 END TEST nvmf_host_management 00:08:41.652 ************************************ 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.652 ************************************ 00:08:41.652 START TEST nvmf_lvol 00:08:41.652 ************************************ 00:08:41.652 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:41.913 * Looking for test storage... 
00:08:41.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.913 --rc genhtml_branch_coverage=1 00:08:41.913 --rc genhtml_function_coverage=1 00:08:41.913 --rc genhtml_legend=1 00:08:41.913 --rc geninfo_all_blocks=1 00:08:41.913 --rc geninfo_unexecuted_blocks=1 00:08:41.913 00:08:41.913 ' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.913 --rc genhtml_branch_coverage=1 00:08:41.913 --rc genhtml_function_coverage=1 00:08:41.913 --rc genhtml_legend=1 00:08:41.913 --rc geninfo_all_blocks=1 00:08:41.913 --rc geninfo_unexecuted_blocks=1 00:08:41.913 00:08:41.913 ' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.913 --rc genhtml_branch_coverage=1 00:08:41.913 --rc genhtml_function_coverage=1 00:08:41.913 --rc genhtml_legend=1 00:08:41.913 --rc geninfo_all_blocks=1 00:08:41.913 --rc geninfo_unexecuted_blocks=1 00:08:41.913 00:08:41.913 ' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.913 --rc genhtml_branch_coverage=1 00:08:41.913 --rc genhtml_function_coverage=1 00:08:41.913 --rc genhtml_legend=1 00:08:41.913 --rc geninfo_all_blocks=1 00:08:41.913 --rc geninfo_unexecuted_blocks=1 00:08:41.913 00:08:41.913 ' 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.913 14:46:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.913 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:41.914 
14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:41.914 Cannot find device "nvmf_init_br" 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:41.914 Cannot find device "nvmf_init_br2" 00:08:41.914 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:41.915 Cannot find device "nvmf_tgt_br" 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.915 Cannot find device "nvmf_tgt_br2" 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:41.915 Cannot find device "nvmf_init_br" 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:41.915 Cannot find device "nvmf_init_br2" 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:41.915 Cannot find device "nvmf_tgt_br" 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:41.915 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:42.174 Cannot find device "nvmf_tgt_br2" 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:42.174 Cannot find device "nvmf_br" 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:42.174 Cannot find device "nvmf_init_if" 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:42.174 Cannot find device "nvmf_init_if2" 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.174 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:42.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:08:42.433 00:08:42.433 --- 10.0.0.3 ping statistics --- 00:08:42.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.433 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:42.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:42.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:08:42.433 00:08:42.433 --- 10.0.0.4 ping statistics --- 00:08:42.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.433 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:08:42.433 00:08:42.433 --- 10.0.0.1 ping statistics --- 00:08:42.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.433 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:42.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:42.433 00:08:42.433 --- 10.0.0.2 ping statistics --- 00:08:42.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.433 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63075 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63075 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63075 ']' 00:08:42.433 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.434 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.434 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.434 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.434 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 [2024-11-22 14:46:56.995275] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
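With connectivity to all four addresses confirmed and nvmf_tgt started inside the namespace, the nvmf_lvol test traced below builds its storage stack entirely over RPC: two malloc bdevs striped into a raid0, a logical volume store on top of the raid, a 20 (MiB) lvol exported through subsystem nqn.2016-06.io.spdk:cnode0, and then snapshot/resize/clone/inflate operations performed while spdk_nvme_perf writes to the namespace. A condensed sketch of that RPC sequence (capturing the returned names/UUIDs with command substitution is an illustration; the script keeps them in the lvs=/lvol=/snapshot=/clone= variables visible in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 (MiB) lvol
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # while spdk_nvme_perf runs against the namespace:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"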
00:08:42.434 [2024-11-22 14:46:56.995408] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.693 [2024-11-22 14:46:57.155486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.693 [2024-11-22 14:46:57.251510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.693 [2024-11-22 14:46:57.251626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.693 [2024-11-22 14:46:57.251641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.693 [2024-11-22 14:46:57.251651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.693 [2024-11-22 14:46:57.251661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.693 [2024-11-22 14:46:57.253532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.693 [2024-11-22 14:46:57.253452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.693 [2024-11-22 14:46:57.253524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.693 [2024-11-22 14:46:57.338636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.628 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.886 [2024-11-22 14:46:58.397380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.886 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.144 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:44.144 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.401 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:44.401 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:44.967 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:45.226 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6cd13e24-0cdd-4ea4-b3cb-3e0d3a951593 00:08:45.226 14:46:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6cd13e24-0cdd-4ea4-b3cb-3e0d3a951593 lvol 20 00:08:45.484 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=16415fc4-4431-4e75-aa43-888bd3a164e5 00:08:45.484 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.742 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16415fc4-4431-4e75-aa43-888bd3a164e5 00:08:46.002 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:46.002 [2024-11-22 14:47:00.662590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.260 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:46.519 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63156 00:08:46.519 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:46.519 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:47.459 14:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 16415fc4-4431-4e75-aa43-888bd3a164e5 MY_SNAPSHOT 00:08:47.718 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=91baa789-b869-4e71-8fc4-74ec63c6b061 00:08:47.718 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 16415fc4-4431-4e75-aa43-888bd3a164e5 30 00:08:48.286 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 91baa789-b869-4e71-8fc4-74ec63c6b061 MY_CLONE 00:08:48.546 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4511fbf5-906e-4f01-8df9-53dd360956de 00:08:48.546 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4511fbf5-906e-4f01-8df9-53dd360956de 00:08:49.114 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63156 00:08:57.231 Initializing NVMe Controllers 00:08:57.231 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:57.231 Controller IO queue size 128, less than required. 00:08:57.231 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:57.231 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:57.231 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:57.231 Initialization complete. Launching workers. 
00:08:57.231 ======================================================== 00:08:57.231 Latency(us) 00:08:57.231 Device Information : IOPS MiB/s Average min max 00:08:57.231 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9910.10 38.71 12926.54 1584.15 75124.02 00:08:57.231 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9889.10 38.63 12955.36 3087.40 65751.10 00:08:57.231 ======================================================== 00:08:57.231 Total : 19799.20 77.34 12940.93 1584.15 75124.02 00:08:57.231 00:08:57.231 14:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.231 14:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 16415fc4-4431-4e75-aa43-888bd3a164e5 00:08:57.490 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6cd13e24-0cdd-4ea4-b3cb-3e0d3a951593 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.749 rmmod nvme_tcp 00:08:57.749 rmmod nvme_fabrics 00:08:57.749 rmmod nvme_keyring 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63075 ']' 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63075 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63075 ']' 00:08:57.749 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63075 00:08:57.750 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:57.750 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63075 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.008 killing process with pid 63075 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63075' 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63075 00:08:58.008 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63075 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:58.267 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:58.539 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:58.539 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:58.539 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.539 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:58.539 00:08:58.539 real 0m16.821s 00:08:58.539 user 1m8.255s 00:08:58.539 sys 0m4.193s 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:58.539 ************************************ 00:08:58.539 END TEST nvmf_lvol 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.539 ************************************ 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.539 ************************************ 00:08:58.539 START TEST nvmf_lvs_grow 00:08:58.539 ************************************ 00:08:58.539 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.539 * Looking for test storage... 00:08:58.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.812 --rc genhtml_branch_coverage=1 00:08:58.812 --rc genhtml_function_coverage=1 00:08:58.812 --rc genhtml_legend=1 00:08:58.812 --rc geninfo_all_blocks=1 00:08:58.812 --rc geninfo_unexecuted_blocks=1 00:08:58.812 00:08:58.812 ' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.812 --rc genhtml_branch_coverage=1 00:08:58.812 --rc genhtml_function_coverage=1 00:08:58.812 --rc genhtml_legend=1 00:08:58.812 --rc geninfo_all_blocks=1 00:08:58.812 --rc geninfo_unexecuted_blocks=1 00:08:58.812 00:08:58.812 ' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.812 --rc genhtml_branch_coverage=1 00:08:58.812 --rc genhtml_function_coverage=1 00:08:58.812 --rc genhtml_legend=1 00:08:58.812 --rc geninfo_all_blocks=1 00:08:58.812 --rc geninfo_unexecuted_blocks=1 00:08:58.812 00:08:58.812 ' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.812 --rc genhtml_branch_coverage=1 00:08:58.812 --rc genhtml_function_coverage=1 00:08:58.812 --rc genhtml_legend=1 00:08:58.812 --rc geninfo_all_blocks=1 00:08:58.812 --rc geninfo_unexecuted_blocks=1 00:08:58.812 00:08:58.812 ' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:58.812 14:47:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.812 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
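Two RPC endpoints are in play for the lvs_grow tests: rpc_py talks to the nvmf_tgt process over the default /var/tmp/spdk.sock, while bdevperf_rpc_sock points at the separate bdevperf process started later with -r /var/tmp/bdevperf.sock. A small sketch of how the two are addressed (bdev_get_bdevs is the call actually issued against bdevperf further down; the first line is just an arbitrary example of a default-socket call):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_lvol_get_lvstores                       # default socket -> nvmf_tgt
  $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs     # explicit socket -> bdevperf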
00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:58.813 Cannot find device "nvmf_init_br" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.813 Cannot find device "nvmf_init_br2" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.813 Cannot find device "nvmf_tgt_br" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.813 Cannot find device "nvmf_tgt_br2" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.813 Cannot find device "nvmf_init_br" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.813 Cannot find device "nvmf_init_br2" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.813 Cannot find device "nvmf_tgt_br" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.813 Cannot find device "nvmf_tgt_br2" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:58.813 Cannot find device "nvmf_br" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:58.813 Cannot find device "nvmf_init_if" 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:58.813 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:59.072 Cannot find device "nvmf_init_if2" 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
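At this point the veth/bridge topology for the lvs_grow run is fully assembled: both interface pairs exist, the target ends live in nvmf_tgt_ns_spdk with 10.0.0.3/24 and 10.0.0.4/24, and all four bridge-side peers are enslaved to nvmf_br. The following checks are not part of the test, but the result can be inspected by hand with something like:

  ip netns list                                    # expect nvmf_tgt_ns_spdk
  ip -brief link show master nvmf_br               # the four *_br peers attached to the bridge
  ip -brief addr show dev nvmf_init_if             # expect 10.0.0.1/24
  ip netns exec nvmf_tgt_ns_spdk ip -brief addr    # expect 10.0.0.3/24 and 10.0.0.4/24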
00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.072 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:59.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:08:59.332 00:08:59.332 --- 10.0.0.3 ping statistics --- 00:08:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.332 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:59.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:59.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:59.332 00:08:59.332 --- 10.0.0.4 ping statistics --- 00:08:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.332 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:59.332 00:08:59.332 --- 10.0.0.1 ping statistics --- 00:08:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.332 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:59.332 00:08:59.332 --- 10.0.0.2 ping statistics --- 00:08:59.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.332 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63533 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63533 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63533 ']' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.332 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.332 [2024-11-22 14:47:13.863942] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
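Unlike the nvmf_lvol run above, which started nvmf_tgt with -m 0x7 (three reactors), the lvs_grow target is pinned to a single core with -m 0x1; the bdevperf initiator started later uses -m 0x2, so the polled-mode target and initiator never share a core. The launch command, as traced, is simply the target binary run inside the namespace:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1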
00:08:59.332 [2024-11-22 14:47:13.864227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.591 [2024-11-22 14:47:14.018535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.591 [2024-11-22 14:47:14.090173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.591 [2024-11-22 14:47:14.090249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.591 [2024-11-22 14:47:14.090277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.591 [2024-11-22 14:47:14.090288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.592 [2024-11-22 14:47:14.090298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.592 [2024-11-22 14:47:14.090842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.592 [2024-11-22 14:47:14.172153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.850 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.110 [2024-11-22 14:47:14.581561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.110 ************************************ 00:09:00.110 START TEST lvs_grow_clean 00:09:00.110 ************************************ 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:00.110 14:47:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.110 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.369 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:00.369 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:00.627 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:00.627 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:00.627 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:00.886 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:00.886 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:00.886 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 lvol 150 00:09:01.144 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=69bc894d-b92b-4e63-9d64-7e71d2c1c6cb 00:09:01.144 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.144 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:01.403 [2024-11-22 14:47:16.005572] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:01.403 [2024-11-22 14:47:16.006092] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:01.403 true 00:09:01.403 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:01.403 14:47:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:01.971 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:01.971 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.971 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69bc894d-b92b-4e63-9d64-7e71d2c1c6cb 00:09:02.230 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:02.488 [2024-11-22 14:47:17.140507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:02.747 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63618 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63618 /var/tmp/bdevperf.sock 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63618 ']' 00:09:03.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.004 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:03.004 [2024-11-22 14:47:17.463072] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:09:03.004 [2024-11-22 14:47:17.463153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63618 ] 00:09:03.004 [2024-11-22 14:47:17.612553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.263 [2024-11-22 14:47:17.686047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.263 [2024-11-22 14:47:17.766949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.263 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.263 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:03.263 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:03.521 Nvme0n1 00:09:03.521 14:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.780 [ 00:09:03.780 { 00:09:03.780 "name": "Nvme0n1", 00:09:03.780 "aliases": [ 00:09:03.780 "69bc894d-b92b-4e63-9d64-7e71d2c1c6cb" 00:09:03.780 ], 00:09:03.780 "product_name": "NVMe disk", 00:09:03.780 "block_size": 4096, 00:09:03.780 "num_blocks": 38912, 00:09:03.780 "uuid": "69bc894d-b92b-4e63-9d64-7e71d2c1c6cb", 00:09:03.780 "numa_id": -1, 00:09:03.780 "assigned_rate_limits": { 00:09:03.780 "rw_ios_per_sec": 0, 00:09:03.780 "rw_mbytes_per_sec": 0, 00:09:03.781 "r_mbytes_per_sec": 0, 00:09:03.781 "w_mbytes_per_sec": 0 00:09:03.781 }, 00:09:03.781 "claimed": false, 00:09:03.781 "zoned": false, 00:09:03.781 "supported_io_types": { 00:09:03.781 "read": true, 00:09:03.781 "write": true, 00:09:03.781 "unmap": true, 00:09:03.781 "flush": true, 00:09:03.781 "reset": true, 00:09:03.781 "nvme_admin": true, 00:09:03.781 "nvme_io": true, 00:09:03.781 "nvme_io_md": false, 00:09:03.781 "write_zeroes": true, 00:09:03.781 "zcopy": false, 00:09:03.781 "get_zone_info": false, 00:09:03.781 "zone_management": false, 00:09:03.781 "zone_append": false, 00:09:03.781 "compare": true, 00:09:03.781 "compare_and_write": true, 00:09:03.781 "abort": true, 00:09:03.781 "seek_hole": false, 00:09:03.781 "seek_data": false, 00:09:03.781 "copy": true, 00:09:03.781 "nvme_iov_md": false 00:09:03.781 }, 00:09:03.781 "memory_domains": [ 00:09:03.781 { 00:09:03.781 "dma_device_id": "system", 00:09:03.781 "dma_device_type": 1 00:09:03.781 } 00:09:03.781 ], 00:09:03.781 "driver_specific": { 00:09:03.781 "nvme": [ 00:09:03.781 { 00:09:03.781 "trid": { 00:09:03.781 "trtype": "TCP", 00:09:03.781 "adrfam": "IPv4", 00:09:03.781 "traddr": "10.0.0.3", 00:09:03.781 "trsvcid": "4420", 00:09:03.781 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.781 }, 00:09:03.781 "ctrlr_data": { 00:09:03.781 "cntlid": 1, 00:09:03.781 "vendor_id": "0x8086", 00:09:03.781 "model_number": "SPDK bdev Controller", 00:09:03.781 "serial_number": "SPDK0", 00:09:03.781 "firmware_revision": "25.01", 00:09:03.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.781 "oacs": { 00:09:03.781 "security": 0, 00:09:03.781 "format": 0, 00:09:03.781 "firmware": 0, 
00:09:03.781 "ns_manage": 0 00:09:03.781 }, 00:09:03.781 "multi_ctrlr": true, 00:09:03.781 "ana_reporting": false 00:09:03.781 }, 00:09:03.781 "vs": { 00:09:03.781 "nvme_version": "1.3" 00:09:03.781 }, 00:09:03.781 "ns_data": { 00:09:03.781 "id": 1, 00:09:03.781 "can_share": true 00:09:03.781 } 00:09:03.781 } 00:09:03.781 ], 00:09:03.781 "mp_policy": "active_passive" 00:09:03.781 } 00:09:03.781 } 00:09:03.781 ] 00:09:03.781 14:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63630 00:09:03.781 14:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.781 14:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:04.039 Running I/O for 10 seconds... 00:09:04.975 Latency(us) 00:09:04.975 [2024-11-22T14:47:19.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.975 Nvme0n1 : 1.00 6753.00 26.38 0.00 0.00 0.00 0.00 0.00 00:09:04.975 [2024-11-22T14:47:19.640Z] =================================================================================================================== 00:09:04.975 [2024-11-22T14:47:19.640Z] Total : 6753.00 26.38 0.00 0.00 0.00 0.00 0.00 00:09:04.975 00:09:05.912 14:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:05.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.913 Nvme0n1 : 2.00 6678.50 26.09 0.00 0.00 0.00 0.00 0.00 00:09:05.913 [2024-11-22T14:47:20.578Z] =================================================================================================================== 00:09:05.913 [2024-11-22T14:47:20.578Z] Total : 6678.50 26.09 0.00 0.00 0.00 0.00 0.00 00:09:05.913 00:09:06.172 true 00:09:06.172 14:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:06.172 14:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:06.431 14:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:06.431 14:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:06.431 14:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63630 00:09:06.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.999 Nvme0n1 : 3.00 6653.67 25.99 0.00 0.00 0.00 0.00 0.00 00:09:06.999 [2024-11-22T14:47:21.664Z] =================================================================================================================== 00:09:06.999 [2024-11-22T14:47:21.664Z] Total : 6653.67 25.99 0.00 0.00 0.00 0.00 0.00 00:09:06.999 00:09:07.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.948 Nvme0n1 : 4.00 6551.50 25.59 0.00 0.00 0.00 0.00 0.00 00:09:07.948 [2024-11-22T14:47:22.613Z] 
=================================================================================================================== 00:09:07.948 [2024-11-22T14:47:22.613Z] Total : 6551.50 25.59 0.00 0.00 0.00 0.00 0.00 00:09:07.948 00:09:08.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.910 Nvme0n1 : 5.00 6511.20 25.43 0.00 0.00 0.00 0.00 0.00 00:09:08.910 [2024-11-22T14:47:23.575Z] =================================================================================================================== 00:09:08.910 [2024-11-22T14:47:23.575Z] Total : 6511.20 25.43 0.00 0.00 0.00 0.00 0.00 00:09:08.910 00:09:10.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.289 Nvme0n1 : 6.00 6463.17 25.25 0.00 0.00 0.00 0.00 0.00 00:09:10.289 [2024-11-22T14:47:24.954Z] =================================================================================================================== 00:09:10.289 [2024-11-22T14:47:24.954Z] Total : 6463.17 25.25 0.00 0.00 0.00 0.00 0.00 00:09:10.289 00:09:10.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.857 Nvme0n1 : 7.00 6447.00 25.18 0.00 0.00 0.00 0.00 0.00 00:09:10.857 [2024-11-22T14:47:25.522Z] =================================================================================================================== 00:09:10.857 [2024-11-22T14:47:25.522Z] Total : 6447.00 25.18 0.00 0.00 0.00 0.00 0.00 00:09:10.857 00:09:12.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.235 Nvme0n1 : 8.00 6403.12 25.01 0.00 0.00 0.00 0.00 0.00 00:09:12.235 [2024-11-22T14:47:26.900Z] =================================================================================================================== 00:09:12.235 [2024-11-22T14:47:26.900Z] Total : 6403.12 25.01 0.00 0.00 0.00 0.00 0.00 00:09:12.235 00:09:13.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.172 Nvme0n1 : 9.00 6369.00 24.88 0.00 0.00 0.00 0.00 0.00 00:09:13.172 [2024-11-22T14:47:27.837Z] =================================================================================================================== 00:09:13.172 [2024-11-22T14:47:27.837Z] Total : 6369.00 24.88 0.00 0.00 0.00 0.00 0.00 00:09:13.172 00:09:14.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.106 Nvme0n1 : 10.00 6341.70 24.77 0.00 0.00 0.00 0.00 0.00 00:09:14.106 [2024-11-22T14:47:28.771Z] =================================================================================================================== 00:09:14.106 [2024-11-22T14:47:28.771Z] Total : 6341.70 24.77 0.00 0.00 0.00 0.00 0.00 00:09:14.106 00:09:14.106 00:09:14.106 Latency(us) 00:09:14.106 [2024-11-22T14:47:28.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.106 Nvme0n1 : 10.02 6343.21 24.78 0.00 0.00 20174.08 5928.03 91988.71 00:09:14.106 [2024-11-22T14:47:28.771Z] =================================================================================================================== 00:09:14.106 [2024-11-22T14:47:28.771Z] Total : 6343.21 24.78 0.00 0.00 20174.08 5928.03 91988.71 00:09:14.106 { 00:09:14.106 "results": [ 00:09:14.106 { 00:09:14.106 "job": "Nvme0n1", 00:09:14.106 "core_mask": "0x2", 00:09:14.106 "workload": "randwrite", 00:09:14.106 "status": "finished", 00:09:14.106 "queue_depth": 128, 00:09:14.106 "io_size": 4096, 00:09:14.106 "runtime": 
10.017802, 00:09:14.106 "iops": 6343.207821436279, 00:09:14.106 "mibps": 24.778155552485465, 00:09:14.106 "io_failed": 0, 00:09:14.106 "io_timeout": 0, 00:09:14.106 "avg_latency_us": 20174.0825142383, 00:09:14.106 "min_latency_us": 5928.029090909091, 00:09:14.106 "max_latency_us": 91988.71272727272 00:09:14.106 } 00:09:14.106 ], 00:09:14.106 "core_count": 1 00:09:14.106 } 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63618 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63618 ']' 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63618 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63618 00:09:14.106 killing process with pid 63618 00:09:14.106 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.106 00:09:14.106 Latency(us) 00:09:14.106 [2024-11-22T14:47:28.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.106 [2024-11-22T14:47:28.771Z] =================================================================================================================== 00:09:14.106 [2024-11-22T14:47:28.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63618' 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63618 00:09:14.106 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63618 00:09:14.364 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:14.622 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.187 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:15.187 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:15.187 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:15.187 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:15.187 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:15.753 [2024-11-22 14:47:30.121158] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:15.753 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:15.753 request: 00:09:15.753 { 00:09:15.753 "uuid": "0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730", 00:09:15.753 "method": "bdev_lvol_get_lvstores", 00:09:15.753 "req_id": 1 00:09:15.753 } 00:09:15.753 Got JSON-RPC error response 00:09:15.753 response: 00:09:15.753 { 00:09:15.753 "code": -19, 00:09:15.753 "message": "No such device" 00:09:15.753 } 00:09:16.012 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:16.012 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.012 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.012 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.012 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.271 aio_bdev 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
69bc894d-b92b-4e63-9d64-7e71d2c1c6cb 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=69bc894d-b92b-4e63-9d64-7e71d2c1c6cb 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.271 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.530 14:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69bc894d-b92b-4e63-9d64-7e71d2c1c6cb -t 2000 00:09:16.789 [ 00:09:16.789 { 00:09:16.789 "name": "69bc894d-b92b-4e63-9d64-7e71d2c1c6cb", 00:09:16.789 "aliases": [ 00:09:16.789 "lvs/lvol" 00:09:16.789 ], 00:09:16.789 "product_name": "Logical Volume", 00:09:16.789 "block_size": 4096, 00:09:16.789 "num_blocks": 38912, 00:09:16.789 "uuid": "69bc894d-b92b-4e63-9d64-7e71d2c1c6cb", 00:09:16.789 "assigned_rate_limits": { 00:09:16.789 "rw_ios_per_sec": 0, 00:09:16.789 "rw_mbytes_per_sec": 0, 00:09:16.789 "r_mbytes_per_sec": 0, 00:09:16.789 "w_mbytes_per_sec": 0 00:09:16.789 }, 00:09:16.789 "claimed": false, 00:09:16.789 "zoned": false, 00:09:16.789 "supported_io_types": { 00:09:16.789 "read": true, 00:09:16.789 "write": true, 00:09:16.789 "unmap": true, 00:09:16.789 "flush": false, 00:09:16.789 "reset": true, 00:09:16.789 "nvme_admin": false, 00:09:16.789 "nvme_io": false, 00:09:16.789 "nvme_io_md": false, 00:09:16.789 "write_zeroes": true, 00:09:16.789 "zcopy": false, 00:09:16.789 "get_zone_info": false, 00:09:16.789 "zone_management": false, 00:09:16.789 "zone_append": false, 00:09:16.789 "compare": false, 00:09:16.789 "compare_and_write": false, 00:09:16.789 "abort": false, 00:09:16.789 "seek_hole": true, 00:09:16.789 "seek_data": true, 00:09:16.789 "copy": false, 00:09:16.789 "nvme_iov_md": false 00:09:16.789 }, 00:09:16.789 "driver_specific": { 00:09:16.789 "lvol": { 00:09:16.789 "lvol_store_uuid": "0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730", 00:09:16.789 "base_bdev": "aio_bdev", 00:09:16.789 "thin_provision": false, 00:09:16.789 "num_allocated_clusters": 38, 00:09:16.789 "snapshot": false, 00:09:16.789 "clone": false, 00:09:16.789 "esnap_clone": false 00:09:16.789 } 00:09:16.789 } 00:09:16.789 } 00:09:16.789 ] 00:09:16.789 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:16.789 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:16.789 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:17.060 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:17.060 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:17.060 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:17.318 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:17.318 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69bc894d-b92b-4e63-9d64-7e71d2c1c6cb 00:09:17.577 14:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0dc38fc5-414a-4a32-a4c2-d1e5a2ef5730 00:09:17.837 14:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:18.095 14:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.663 ************************************ 00:09:18.663 END TEST lvs_grow_clean 00:09:18.663 ************************************ 00:09:18.663 00:09:18.663 real 0m18.433s 00:09:18.663 user 0m17.000s 00:09:18.663 sys 0m2.771s 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.663 ************************************ 00:09:18.663 START TEST lvs_grow_dirty 00:09:18.663 ************************************ 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:18.663 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.664 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.922 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.922 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:19.182 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=71ec08ff-d248-4b41-874d-b35a65adac55 00:09:19.182 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:19.182 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:19.440 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:19.440 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:19.440 14:47:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 71ec08ff-d248-4b41-874d-b35a65adac55 lvol 150 00:09:19.699 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b5b77c08-55d7-4469-8445-351efcf93611 00:09:19.699 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.700 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:19.958 [2024-11-22 14:47:34.580357] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:19.958 [2024-11-22 14:47:34.580494] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.958 true 00:09:19.958 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.958 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:20.527 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:20.527 14:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.527 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5b77c08-55d7-4469-8445-351efcf93611 00:09:21.093 14:47:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:21.093 [2024-11-22 14:47:35.724937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:21.093 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:21.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63888 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63888 /var/tmp/bdevperf.sock 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63888 ']' 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.352 14:47:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.611 [2024-11-22 14:47:36.017363] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:09:21.611 [2024-11-22 14:47:36.017946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63888 ] 00:09:21.611 [2024-11-22 14:47:36.167585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.611 [2024-11-22 14:47:36.236294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.870 [2024-11-22 14:47:36.313997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.870 14:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.870 14:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:21.870 14:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.129 Nvme0n1 00:09:22.129 14:47:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.387 [ 00:09:22.387 { 00:09:22.387 "name": "Nvme0n1", 00:09:22.387 "aliases": [ 00:09:22.387 "b5b77c08-55d7-4469-8445-351efcf93611" 00:09:22.387 ], 00:09:22.387 "product_name": "NVMe disk", 00:09:22.387 "block_size": 4096, 00:09:22.387 "num_blocks": 38912, 00:09:22.388 "uuid": "b5b77c08-55d7-4469-8445-351efcf93611", 00:09:22.388 "numa_id": -1, 00:09:22.388 "assigned_rate_limits": { 00:09:22.388 "rw_ios_per_sec": 0, 00:09:22.388 "rw_mbytes_per_sec": 0, 00:09:22.388 "r_mbytes_per_sec": 0, 00:09:22.388 "w_mbytes_per_sec": 0 00:09:22.388 }, 00:09:22.388 "claimed": false, 00:09:22.388 "zoned": false, 00:09:22.388 "supported_io_types": { 00:09:22.388 "read": true, 00:09:22.388 "write": true, 00:09:22.388 "unmap": true, 00:09:22.388 "flush": true, 00:09:22.388 "reset": true, 00:09:22.388 "nvme_admin": true, 00:09:22.388 "nvme_io": true, 00:09:22.388 "nvme_io_md": false, 00:09:22.388 "write_zeroes": true, 00:09:22.388 "zcopy": false, 00:09:22.388 "get_zone_info": false, 00:09:22.388 "zone_management": false, 00:09:22.388 "zone_append": false, 00:09:22.388 "compare": true, 00:09:22.388 "compare_and_write": true, 00:09:22.388 "abort": true, 00:09:22.388 "seek_hole": false, 00:09:22.388 "seek_data": false, 00:09:22.388 "copy": true, 00:09:22.388 "nvme_iov_md": false 00:09:22.388 }, 00:09:22.388 "memory_domains": [ 00:09:22.388 { 00:09:22.388 "dma_device_id": "system", 00:09:22.388 "dma_device_type": 1 00:09:22.388 } 00:09:22.388 ], 00:09:22.388 "driver_specific": { 00:09:22.388 "nvme": [ 00:09:22.388 { 00:09:22.388 "trid": { 00:09:22.388 "trtype": "TCP", 00:09:22.388 "adrfam": "IPv4", 00:09:22.388 "traddr": "10.0.0.3", 00:09:22.388 "trsvcid": "4420", 00:09:22.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.388 }, 00:09:22.388 "ctrlr_data": { 00:09:22.388 "cntlid": 1, 00:09:22.388 "vendor_id": "0x8086", 00:09:22.388 "model_number": "SPDK bdev Controller", 00:09:22.388 "serial_number": "SPDK0", 00:09:22.388 "firmware_revision": "25.01", 00:09:22.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.388 "oacs": { 00:09:22.388 "security": 0, 00:09:22.388 "format": 0, 00:09:22.388 "firmware": 0, 
00:09:22.388 "ns_manage": 0 00:09:22.388 }, 00:09:22.388 "multi_ctrlr": true, 00:09:22.388 "ana_reporting": false 00:09:22.388 }, 00:09:22.388 "vs": { 00:09:22.388 "nvme_version": "1.3" 00:09:22.388 }, 00:09:22.388 "ns_data": { 00:09:22.388 "id": 1, 00:09:22.388 "can_share": true 00:09:22.388 } 00:09:22.388 } 00:09:22.388 ], 00:09:22.388 "mp_policy": "active_passive" 00:09:22.388 } 00:09:22.388 } 00:09:22.388 ] 00:09:22.388 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63904 00:09:22.388 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.388 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:22.647 Running I/O for 10 seconds... 00:09:23.583 Latency(us) 00:09:23.583 [2024-11-22T14:47:38.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.583 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:23.583 [2024-11-22T14:47:38.248Z] =================================================================================================================== 00:09:23.583 [2024-11-22T14:47:38.248Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:23.583 00:09:24.520 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:24.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.520 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:24.520 [2024-11-22T14:47:39.185Z] =================================================================================================================== 00:09:24.520 [2024-11-22T14:47:39.185Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:24.520 00:09:24.778 true 00:09:24.778 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:24.778 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.345 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.345 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.345 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63904 00:09:25.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.604 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:25.604 [2024-11-22T14:47:40.269Z] =================================================================================================================== 00:09:25.604 [2024-11-22T14:47:40.269Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:25.604 00:09:26.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.542 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:26.542 [2024-11-22T14:47:41.207Z] 
=================================================================================================================== 00:09:26.542 [2024-11-22T14:47:41.207Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:26.542 00:09:27.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.931 Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:27.931 [2024-11-22T14:47:42.596Z] =================================================================================================================== 00:09:27.931 [2024-11-22T14:47:42.596Z] Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:27.931 00:09:28.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.513 Nvme0n1 : 6.00 6722.00 26.26 0.00 0.00 0.00 0.00 0.00 00:09:28.513 [2024-11-22T14:47:43.178Z] =================================================================================================================== 00:09:28.513 [2024-11-22T14:47:43.178Z] Total : 6722.00 26.26 0.00 0.00 0.00 0.00 0.00 00:09:28.513 00:09:29.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.888 Nvme0n1 : 7.00 6614.43 25.84 0.00 0.00 0.00 0.00 0.00 00:09:29.888 [2024-11-22T14:47:44.553Z] =================================================================================================================== 00:09:29.888 [2024-11-22T14:47:44.553Z] Total : 6614.43 25.84 0.00 0.00 0.00 0.00 0.00 00:09:29.888 00:09:30.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.822 Nvme0n1 : 8.00 6708.38 26.20 0.00 0.00 0.00 0.00 0.00 00:09:30.822 [2024-11-22T14:47:45.487Z] =================================================================================================================== 00:09:30.822 [2024-11-22T14:47:45.487Z] Total : 6708.38 26.20 0.00 0.00 0.00 0.00 0.00 00:09:30.822 00:09:31.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.755 Nvme0n1 : 9.00 6781.44 26.49 0.00 0.00 0.00 0.00 0.00 00:09:31.755 [2024-11-22T14:47:46.420Z] =================================================================================================================== 00:09:31.755 [2024-11-22T14:47:46.420Z] Total : 6781.44 26.49 0.00 0.00 0.00 0.00 0.00 00:09:31.755 00:09:32.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.692 Nvme0n1 : 10.00 6814.50 26.62 0.00 0.00 0.00 0.00 0.00 00:09:32.692 [2024-11-22T14:47:47.357Z] =================================================================================================================== 00:09:32.692 [2024-11-22T14:47:47.357Z] Total : 6814.50 26.62 0.00 0.00 0.00 0.00 0.00 00:09:32.692 00:09:32.692 00:09:32.692 Latency(us) 00:09:32.692 [2024-11-22T14:47:47.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.692 Nvme0n1 : 10.01 6822.95 26.65 0.00 0.00 18756.49 5064.15 310759.80 00:09:32.692 [2024-11-22T14:47:47.357Z] =================================================================================================================== 00:09:32.692 [2024-11-22T14:47:47.357Z] Total : 6822.95 26.65 0.00 0.00 18756.49 5064.15 310759.80 00:09:32.692 { 00:09:32.692 "results": [ 00:09:32.692 { 00:09:32.692 "job": "Nvme0n1", 00:09:32.692 "core_mask": "0x2", 00:09:32.692 "workload": "randwrite", 00:09:32.692 "status": "finished", 00:09:32.692 "queue_depth": 128, 00:09:32.692 "io_size": 4096, 00:09:32.692 "runtime": 
10.006373, 00:09:32.692 "iops": 6822.951732860648, 00:09:32.692 "mibps": 26.652155206486906, 00:09:32.692 "io_failed": 0, 00:09:32.692 "io_timeout": 0, 00:09:32.692 "avg_latency_us": 18756.485787247188, 00:09:32.692 "min_latency_us": 5064.145454545454, 00:09:32.692 "max_latency_us": 310759.7963636364 00:09:32.692 } 00:09:32.692 ], 00:09:32.692 "core_count": 1 00:09:32.692 } 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63888 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63888 ']' 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63888 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63888 00:09:32.692 killing process with pid 63888 00:09:32.692 Received shutdown signal, test time was about 10.000000 seconds 00:09:32.692 00:09:32.692 Latency(us) 00:09:32.692 [2024-11-22T14:47:47.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.692 [2024-11-22T14:47:47.357Z] =================================================================================================================== 00:09:32.692 [2024-11-22T14:47:47.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63888' 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63888 00:09:32.692 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63888 00:09:32.951 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:33.211 14:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.470 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:33.470 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:33.728 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:33.728 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:33.728 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63533 
00:09:33.728 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63533 00:09:33.987 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63533 Killed "${NVMF_APP[@]}" "$@" 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64041 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64041 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64041 ']' 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.987 14:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.987 [2024-11-22 14:47:48.479349] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:09:33.987 [2024-11-22 14:47:48.479447] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.987 [2024-11-22 14:47:48.617948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.246 [2024-11-22 14:47:48.664999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.246 [2024-11-22 14:47:48.665074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.246 [2024-11-22 14:47:48.665085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.246 [2024-11-22 14:47:48.665092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.246 [2024-11-22 14:47:48.665098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:34.246 [2024-11-22 14:47:48.665524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.246 [2024-11-22 14:47:48.741823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.820 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.820 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:34.820 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.820 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.820 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.082 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.082 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.341 [2024-11-22 14:47:49.790969] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:35.341 [2024-11-22 14:47:49.791668] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:35.341 [2024-11-22 14:47:49.792018] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b5b77c08-55d7-4469-8445-351efcf93611 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b5b77c08-55d7-4469-8445-351efcf93611 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.341 14:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.599 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5b77c08-55d7-4469-8445-351efcf93611 -t 2000 00:09:35.860 [ 00:09:35.860 { 00:09:35.860 "name": "b5b77c08-55d7-4469-8445-351efcf93611", 00:09:35.860 "aliases": [ 00:09:35.860 "lvs/lvol" 00:09:35.860 ], 00:09:35.860 "product_name": "Logical Volume", 00:09:35.860 "block_size": 4096, 00:09:35.860 "num_blocks": 38912, 00:09:35.860 "uuid": "b5b77c08-55d7-4469-8445-351efcf93611", 00:09:35.860 "assigned_rate_limits": { 00:09:35.860 "rw_ios_per_sec": 0, 00:09:35.860 "rw_mbytes_per_sec": 0, 00:09:35.860 "r_mbytes_per_sec": 0, 00:09:35.860 "w_mbytes_per_sec": 0 00:09:35.860 }, 00:09:35.860 
"claimed": false, 00:09:35.860 "zoned": false, 00:09:35.860 "supported_io_types": { 00:09:35.860 "read": true, 00:09:35.860 "write": true, 00:09:35.860 "unmap": true, 00:09:35.860 "flush": false, 00:09:35.860 "reset": true, 00:09:35.860 "nvme_admin": false, 00:09:35.860 "nvme_io": false, 00:09:35.860 "nvme_io_md": false, 00:09:35.860 "write_zeroes": true, 00:09:35.860 "zcopy": false, 00:09:35.860 "get_zone_info": false, 00:09:35.860 "zone_management": false, 00:09:35.860 "zone_append": false, 00:09:35.860 "compare": false, 00:09:35.860 "compare_and_write": false, 00:09:35.860 "abort": false, 00:09:35.860 "seek_hole": true, 00:09:35.860 "seek_data": true, 00:09:35.860 "copy": false, 00:09:35.860 "nvme_iov_md": false 00:09:35.860 }, 00:09:35.860 "driver_specific": { 00:09:35.860 "lvol": { 00:09:35.860 "lvol_store_uuid": "71ec08ff-d248-4b41-874d-b35a65adac55", 00:09:35.860 "base_bdev": "aio_bdev", 00:09:35.860 "thin_provision": false, 00:09:35.860 "num_allocated_clusters": 38, 00:09:35.860 "snapshot": false, 00:09:35.860 "clone": false, 00:09:35.860 "esnap_clone": false 00:09:35.860 } 00:09:35.860 } 00:09:35.860 } 00:09:35.860 ] 00:09:35.860 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:35.860 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:35.860 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:36.120 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:36.120 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:36.120 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:36.379 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:36.379 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.638 [2024-11-22 14:47:51.072362] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.638 14:47:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:36.638 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:36.897 request: 00:09:36.897 { 00:09:36.897 "uuid": "71ec08ff-d248-4b41-874d-b35a65adac55", 00:09:36.897 "method": "bdev_lvol_get_lvstores", 00:09:36.897 "req_id": 1 00:09:36.897 } 00:09:36.897 Got JSON-RPC error response 00:09:36.897 response: 00:09:36.897 { 00:09:36.897 "code": -19, 00:09:36.897 "message": "No such device" 00:09:36.897 } 00:09:36.897 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:36.897 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.897 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:36.897 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.897 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:37.155 aio_bdev 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b5b77c08-55d7-4469-8445-351efcf93611 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b5b77c08-55d7-4469-8445-351efcf93611 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.155 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:37.414 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5b77c08-55d7-4469-8445-351efcf93611 -t 2000 00:09:37.673 [ 00:09:37.673 { 
00:09:37.673 "name": "b5b77c08-55d7-4469-8445-351efcf93611", 00:09:37.673 "aliases": [ 00:09:37.673 "lvs/lvol" 00:09:37.673 ], 00:09:37.673 "product_name": "Logical Volume", 00:09:37.673 "block_size": 4096, 00:09:37.673 "num_blocks": 38912, 00:09:37.673 "uuid": "b5b77c08-55d7-4469-8445-351efcf93611", 00:09:37.673 "assigned_rate_limits": { 00:09:37.673 "rw_ios_per_sec": 0, 00:09:37.673 "rw_mbytes_per_sec": 0, 00:09:37.673 "r_mbytes_per_sec": 0, 00:09:37.673 "w_mbytes_per_sec": 0 00:09:37.673 }, 00:09:37.673 "claimed": false, 00:09:37.673 "zoned": false, 00:09:37.673 "supported_io_types": { 00:09:37.673 "read": true, 00:09:37.673 "write": true, 00:09:37.673 "unmap": true, 00:09:37.673 "flush": false, 00:09:37.673 "reset": true, 00:09:37.673 "nvme_admin": false, 00:09:37.673 "nvme_io": false, 00:09:37.673 "nvme_io_md": false, 00:09:37.673 "write_zeroes": true, 00:09:37.673 "zcopy": false, 00:09:37.673 "get_zone_info": false, 00:09:37.673 "zone_management": false, 00:09:37.673 "zone_append": false, 00:09:37.673 "compare": false, 00:09:37.673 "compare_and_write": false, 00:09:37.673 "abort": false, 00:09:37.673 "seek_hole": true, 00:09:37.673 "seek_data": true, 00:09:37.673 "copy": false, 00:09:37.673 "nvme_iov_md": false 00:09:37.673 }, 00:09:37.673 "driver_specific": { 00:09:37.673 "lvol": { 00:09:37.673 "lvol_store_uuid": "71ec08ff-d248-4b41-874d-b35a65adac55", 00:09:37.673 "base_bdev": "aio_bdev", 00:09:37.673 "thin_provision": false, 00:09:37.673 "num_allocated_clusters": 38, 00:09:37.673 "snapshot": false, 00:09:37.673 "clone": false, 00:09:37.673 "esnap_clone": false 00:09:37.673 } 00:09:37.673 } 00:09:37.673 } 00:09:37.673 ] 00:09:37.673 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:37.673 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:37.673 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.932 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.932 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:37.932 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:38.216 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:38.216 14:47:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b5b77c08-55d7-4469-8445-351efcf93611 00:09:38.494 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71ec08ff-d248-4b41-874d-b35a65adac55 00:09:38.753 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:39.011 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:39.579 ************************************ 00:09:39.579 END TEST lvs_grow_dirty 00:09:39.579 ************************************ 00:09:39.579 00:09:39.579 real 0m20.913s 00:09:39.580 user 0m41.403s 00:09:39.580 sys 0m9.315s 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:39.580 nvmf_trace.0 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.580 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.839 rmmod nvme_tcp 00:09:39.839 rmmod nvme_fabrics 00:09:39.839 rmmod nvme_keyring 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64041 ']' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64041 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64041 ']' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64041 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:39.839 14:47:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64041 00:09:39.839 killing process with pid 64041 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64041' 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64041 00:09:39.839 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64041 00:09:40.406 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.406 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.407 14:47:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:40.407 00:09:40.407 real 0m41.905s 00:09:40.407 user 1m5.243s 00:09:40.407 sys 0m13.109s 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.407 ************************************ 00:09:40.407 END TEST nvmf_lvs_grow 00:09:40.407 ************************************ 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.407 14:47:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.667 ************************************ 00:09:40.667 START TEST nvmf_bdev_io_wait 00:09:40.667 ************************************ 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.667 * Looking for test storage... 
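For orientation, the lvs_grow_dirty case that finished above reduces to a short JSON-RPC sequence against the running target. A condensed sketch of it, assembled from the rpc.py calls logged above ($SPDK and $RPC are shorthands introduced here; every command, UUID and path is taken from this log):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    # reattach the file-backed AIO bdev; the blobstore replays its dirty metadata on load
    $RPC bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
    $RPC bdev_get_bdevs -b b5b77c08-55d7-4469-8445-351efcf93611 -t 2000    # lvs/lvol volume is back
    $RPC bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55    # free/total cluster counts check out
    # hot-remove the base bdev while the lvstore is live ...
    $RPC bdev_aio_delete aio_bdev
    $RPC bdev_lvol_get_lvstores -u 71ec08ff-d248-4b41-874d-b35a65adac55    # now fails with -19 "No such device"
    # ... then recreate it: the lvstore loads again with the same lvol and cluster counts
    $RPC bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
    $RPC bdev_get_bdevs -b b5b77c08-55d7-4469-8445-351efcf93611 -t 2000
    # final cleanup
    $RPC bdev_lvol_delete b5b77c08-55d7-4469-8445-351efcf93611
    $RPC bdev_lvol_delete_lvstore -u 71ec08ff-d248-4b41-874d-b35a65adac55
    $RPC bdev_aio_delete aio_bdev
    rm -f "$SPDK/test/nvmf/target/aio_bdev"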
00:09:40.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.667 --rc genhtml_branch_coverage=1 00:09:40.667 --rc genhtml_function_coverage=1 00:09:40.667 --rc genhtml_legend=1 00:09:40.667 --rc geninfo_all_blocks=1 00:09:40.667 --rc geninfo_unexecuted_blocks=1 00:09:40.667 00:09:40.667 ' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.667 --rc genhtml_branch_coverage=1 00:09:40.667 --rc genhtml_function_coverage=1 00:09:40.667 --rc genhtml_legend=1 00:09:40.667 --rc geninfo_all_blocks=1 00:09:40.667 --rc geninfo_unexecuted_blocks=1 00:09:40.667 00:09:40.667 ' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.667 --rc genhtml_branch_coverage=1 00:09:40.667 --rc genhtml_function_coverage=1 00:09:40.667 --rc genhtml_legend=1 00:09:40.667 --rc geninfo_all_blocks=1 00:09:40.667 --rc geninfo_unexecuted_blocks=1 00:09:40.667 00:09:40.667 ' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.667 --rc genhtml_branch_coverage=1 00:09:40.667 --rc genhtml_function_coverage=1 00:09:40.667 --rc genhtml_legend=1 00:09:40.667 --rc geninfo_all_blocks=1 00:09:40.667 --rc geninfo_unexecuted_blocks=1 00:09:40.667 00:09:40.667 ' 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:40.667 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
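The nvmftestinit call that follows builds an all-virtual test network from scratch. Condensed from the ip commands logged below (interface names and addresses are exactly those in this log; address assignment and the port-4420 iptables rules are summarized in comments rather than spelled out):

    ip netns add nvmf_tgt_ns_spdk
    # four veth pairs; the *_if ends carry the addresses, the *_br ends get bridged
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # host side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # host side, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # moved into the namespace, 10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # moved into the namespace, 10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # one bridge ties the peer ends together so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # ACCEPT rules for TCP/4420 go on the init interfaces; the pings to 10.0.0.3/10.0.0.4
    # and back to 10.0.0.1/10.0.0.2 below verify that the bridge forwards both ways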
00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.668 
14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.668 Cannot find device "nvmf_init_br" 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.668 Cannot find device "nvmf_init_br2" 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:40.668 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.928 Cannot find device "nvmf_tgt_br" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.928 Cannot find device "nvmf_tgt_br2" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.928 Cannot find device "nvmf_init_br" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.928 Cannot find device "nvmf_init_br2" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:40.928 Cannot find device "nvmf_tgt_br" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:40.928 Cannot find device "nvmf_tgt_br2" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:40.928 Cannot find device "nvmf_br" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:40.928 Cannot find device "nvmf_init_if" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:40.928 Cannot find device "nvmf_init_if2" 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:40.928 
14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:40.928 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:09:41.187 00:09:41.187 --- 10.0.0.3 ping statistics --- 00:09:41.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.187 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:41.187 00:09:41.187 --- 10.0.0.4 ping statistics --- 00:09:41.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.187 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:41.187 00:09:41.187 --- 10.0.0.1 ping statistics --- 00:09:41.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.187 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:41.187 00:09:41.187 --- 10.0.0.2 ping statistics --- 00:09:41.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.187 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64418 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64418 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64418 ']' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.187 14:47:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.187 [2024-11-22 14:47:55.746571] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:09:41.187 [2024-11-22 14:47:55.746703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.446 [2024-11-22 14:47:55.892310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.446 [2024-11-22 14:47:55.970798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.446 [2024-11-22 14:47:55.970861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.446 [2024-11-22 14:47:55.970872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.446 [2024-11-22 14:47:55.970880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.446 [2024-11-22 14:47:55.970886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.446 [2024-11-22 14:47:55.972223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.446 [2024-11-22 14:47:55.972360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.446 [2024-11-22 14:47:55.972506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.446 [2024-11-22 14:47:55.972505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.446 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 [2024-11-22 14:47:56.142281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 [2024-11-22 14:47:56.155853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 Malloc0 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.705 [2024-11-22 14:47:56.215847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64451 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64453 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.705 14:47:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.705 { 00:09:41.705 "params": { 00:09:41.705 "name": "Nvme$subsystem", 00:09:41.705 "trtype": "$TEST_TRANSPORT", 00:09:41.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.705 "adrfam": "ipv4", 00:09:41.705 "trsvcid": "$NVMF_PORT", 00:09:41.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.705 "hdgst": ${hdgst:-false}, 00:09:41.705 "ddgst": ${ddgst:-false} 00:09:41.705 }, 00:09:41.705 "method": "bdev_nvme_attach_controller" 00:09:41.705 } 00:09:41.705 EOF 00:09:41.705 )") 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64455 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.705 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.705 { 00:09:41.705 "params": { 00:09:41.705 "name": "Nvme$subsystem", 00:09:41.705 "trtype": "$TEST_TRANSPORT", 00:09:41.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.705 "adrfam": "ipv4", 00:09:41.705 "trsvcid": "$NVMF_PORT", 00:09:41.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.705 "hdgst": ${hdgst:-false}, 00:09:41.706 "ddgst": ${ddgst:-false} 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 } 00:09:41.706 EOF 00:09:41.706 )") 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64457 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:09:41.706 { 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme$subsystem", 00:09:41.706 "trtype": "$TEST_TRANSPORT", 00:09:41.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "$NVMF_PORT", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.706 "hdgst": ${hdgst:-false}, 00:09:41.706 "ddgst": ${ddgst:-false} 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 } 00:09:41.706 EOF 00:09:41.706 )") 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.706 { 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme$subsystem", 00:09:41.706 "trtype": "$TEST_TRANSPORT", 00:09:41.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "$NVMF_PORT", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.706 "hdgst": ${hdgst:-false}, 00:09:41.706 "ddgst": ${ddgst:-false} 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 } 00:09:41.706 EOF 00:09:41.706 )") 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme1", 00:09:41.706 "trtype": "tcp", 00:09:41.706 "traddr": "10.0.0.3", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "4420", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.706 "hdgst": false, 00:09:41.706 "ddgst": false 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 }' 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme1", 00:09:41.706 "trtype": "tcp", 00:09:41.706 "traddr": "10.0.0.3", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "4420", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.706 "hdgst": false, 00:09:41.706 "ddgst": false 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 }' 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme1", 00:09:41.706 "trtype": "tcp", 00:09:41.706 "traddr": "10.0.0.3", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "4420", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.706 "hdgst": false, 00:09:41.706 "ddgst": false 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 }' 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.706 "params": { 00:09:41.706 "name": "Nvme1", 00:09:41.706 "trtype": "tcp", 00:09:41.706 "traddr": "10.0.0.3", 00:09:41.706 "adrfam": "ipv4", 00:09:41.706 "trsvcid": "4420", 00:09:41.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.706 "hdgst": false, 00:09:41.706 "ddgst": false 00:09:41.706 }, 00:09:41.706 "method": "bdev_nvme_attach_controller" 00:09:41.706 }' 00:09:41.706 [2024-11-22 14:47:56.286819] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:09:41.706 [2024-11-22 14:47:56.286921] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:41.706 14:47:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64451 00:09:41.706 [2024-11-22 14:47:56.295406] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:09:41.706 [2024-11-22 14:47:56.295492] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:41.706 [2024-11-22 14:47:56.304890] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:09:41.706 [2024-11-22 14:47:56.304968] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.706 [2024-11-22 14:47:56.320295] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:09:41.706 [2024-11-22 14:47:56.320460] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:41.966 [2024-11-22 14:47:56.546648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.966 [2024-11-22 14:47:56.606629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.966 [2024-11-22 14:47:56.620669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.224 [2024-11-22 14:47:56.629350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.224 [2024-11-22 14:47:56.694124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.224 [2024-11-22 14:47:56.702615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.224 [2024-11-22 14:47:56.717034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.224 [2024-11-22 14:47:56.754076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:42.224 [2024-11-22 14:47:56.768275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.224 Running I/O for 1 seconds... 00:09:42.224 [2024-11-22 14:47:56.797864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.224 [2024-11-22 14:47:56.855914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.224 [2024-11-22 14:47:56.869780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.224 Running I/O for 1 seconds... 00:09:42.482 Running I/O for 1 seconds... 00:09:42.482 Running I/O for 1 seconds... 
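For reference, the JSON each bdevperf instance reads from /dev/fd/63 above is built by gen_nvmf_target_json in nvmf/common.sh: one heredoc fragment per subsystem is appended to a config array, the fragments are joined with IFS=',' and the result is pretty-printed through jq before being handed to bdevperf via process substitution. A minimal stand-alone sketch of that pattern follows; the per-controller fragment and the join/jq steps mirror the trace, while the outer "subsystems"/"bdev" wrapper and the function name are assumptions added only so the sketch runs by itself.

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per requested subsystem index,
        # mirroring the heredoc fragment seen in the trace (values as expanded there).
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas (first character of IFS) and pretty-print with jq.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# "--json /dev/fd/63" in the trace is bash process substitution: bdevperf reads the
# generated config from an anonymous pipe instead of a file, e.g. (hypothetical invocation):
#   build/examples/bdevperf --json <(gen_target_json_sketch 1) -q 128 -o 4096 -w read -t 1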
00:09:43.419 6661.00 IOPS, 26.02 MiB/s 00:09:43.419 Latency(us) 00:09:43.419 [2024-11-22T14:47:58.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.419 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:43.419 Nvme1n1 : 1.01 6707.45 26.20 0.00 0.00 18965.50 4915.20 23473.80 00:09:43.419 [2024-11-22T14:47:58.084Z] =================================================================================================================== 00:09:43.419 [2024-11-22T14:47:58.084Z] Total : 6707.45 26.20 0.00 0.00 18965.50 4915.20 23473.80 00:09:43.419 5746.00 IOPS, 22.45 MiB/s 00:09:43.419 Latency(us) 00:09:43.419 [2024-11-22T14:47:58.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.419 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:43.419 Nvme1n1 : 1.01 5804.09 22.67 0.00 0.00 21906.92 8877.15 31933.91 00:09:43.419 [2024-11-22T14:47:58.084Z] =================================================================================================================== 00:09:43.419 [2024-11-22T14:47:58.084Z] Total : 5804.09 22.67 0.00 0.00 21906.92 8877.15 31933.91 00:09:43.419 6981.00 IOPS, 27.27 MiB/s 00:09:43.419 Latency(us) 00:09:43.419 [2024-11-22T14:47:58.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.419 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.419 Nvme1n1 : 1.01 7073.85 27.63 0.00 0.00 18015.44 6851.49 27525.12 00:09:43.419 [2024-11-22T14:47:58.084Z] =================================================================================================================== 00:09:43.419 [2024-11-22T14:47:58.084Z] Total : 7073.85 27.63 0.00 0.00 18015.44 6851.49 27525.12 00:09:43.419 14:47:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64453 00:09:43.419 166528.00 IOPS, 650.50 MiB/s 00:09:43.419 Latency(us) 00:09:43.419 [2024-11-22T14:47:58.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.419 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:43.419 Nvme1n1 : 1.00 166183.03 649.15 0.00 0.00 766.12 420.77 2040.55 00:09:43.419 [2024-11-22T14:47:58.084Z] =================================================================================================================== 00:09:43.419 [2024-11-22T14:47:58.084Z] Total : 166183.03 649.15 0.00 0.00 766.12 420.77 2040.55 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64455 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64457 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.678 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.679 rmmod nvme_tcp 00:09:43.679 rmmod nvme_fabrics 00:09:43.679 rmmod nvme_keyring 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64418 ']' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64418 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64418 ']' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64418 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64418 00:09:43.679 killing process with pid 64418 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64418' 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64418 00:09:43.679 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64418 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:43.938 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:44.197 00:09:44.197 real 0m3.765s 00:09:44.197 user 0m14.925s 00:09:44.197 sys 0m2.343s 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.197 ************************************ 00:09:44.197 END TEST nvmf_bdev_io_wait 00:09:44.197 ************************************ 00:09:44.197 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.457 ************************************ 00:09:44.457 START TEST nvmf_queue_depth 00:09:44.457 ************************************ 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:44.457 * Looking for test storage... 
00:09:44.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.457 14:47:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.457 --rc genhtml_branch_coverage=1 00:09:44.457 --rc genhtml_function_coverage=1 00:09:44.457 --rc genhtml_legend=1 00:09:44.457 --rc geninfo_all_blocks=1 00:09:44.457 --rc geninfo_unexecuted_blocks=1 00:09:44.457 00:09:44.457 ' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.457 --rc genhtml_branch_coverage=1 00:09:44.457 --rc genhtml_function_coverage=1 00:09:44.457 --rc genhtml_legend=1 00:09:44.457 --rc geninfo_all_blocks=1 00:09:44.457 --rc geninfo_unexecuted_blocks=1 00:09:44.457 00:09:44.457 ' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.457 --rc genhtml_branch_coverage=1 00:09:44.457 --rc genhtml_function_coverage=1 00:09:44.457 --rc genhtml_legend=1 00:09:44.457 --rc geninfo_all_blocks=1 00:09:44.457 --rc geninfo_unexecuted_blocks=1 00:09:44.457 00:09:44.457 ' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.457 --rc genhtml_branch_coverage=1 00:09:44.457 --rc genhtml_function_coverage=1 00:09:44.457 --rc genhtml_legend=1 00:09:44.457 --rc geninfo_all_blocks=1 00:09:44.457 --rc geninfo_unexecuted_blocks=1 00:09:44.457 00:09:44.457 ' 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.457 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.458 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:44.458 
14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.458 14:47:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:44.458 Cannot find device "nvmf_init_br" 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:44.458 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:44.717 Cannot find device "nvmf_init_br2" 00:09:44.717 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:44.717 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:44.717 Cannot find device "nvmf_tgt_br" 00:09:44.717 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:44.717 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.717 Cannot find device "nvmf_tgt_br2" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:44.718 Cannot find device "nvmf_init_br" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:44.718 Cannot find device "nvmf_init_br2" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:44.718 Cannot find device "nvmf_tgt_br" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.718 Cannot find device "nvmf_tgt_br2" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.718 Cannot find device "nvmf_br" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.718 Cannot find device "nvmf_init_if" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.718 Cannot find device "nvmf_init_if2" 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.718 14:47:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.718 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.982 
14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:09:44.982 00:09:44.982 --- 10.0.0.3 ping statistics --- 00:09:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.982 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:44.982 00:09:44.982 --- 10.0.0.4 ping statistics --- 00:09:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.982 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:44.982 00:09:44.982 --- 10.0.0.1 ping statistics --- 00:09:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.982 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:44.982 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:44.982 00:09:44.982 --- 10.0.0.2 ping statistics --- 00:09:44.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.982 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64717 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64717 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64717 ']' 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.983 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 [2024-11-22 14:47:59.598356] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
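Condensing the nvmf_veth_init sequence traced above: before the target starts, the test builds an isolated topology with a network namespace holding the target-side veth ends, a bridge joining the host-side ends, 10.0.0.0/24 addressing, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so the later cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible in the fini path) can strip exactly those rules. The sketch below keeps a single initiator/target pair rather than the two pairs the script creates.

# Single-pair sketch of the topology assembled by nvmf_veth_init in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                   # bridge joins the host-side ends
ip link set nvmf_tgt_br master nvmf_br

# Rules carry an SPDK_NVMF comment so cleanup can filter them out of iptables-save output.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3                                        # host to target, as checked above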
00:09:44.983 [2024-11-22 14:47:59.598452] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.241 [2024-11-22 14:47:59.750923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.242 [2024-11-22 14:47:59.827909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.242 [2024-11-22 14:47:59.828009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.242 [2024-11-22 14:47:59.828042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.242 [2024-11-22 14:47:59.828053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.242 [2024-11-22 14:47:59.828064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.242 [2024-11-22 14:47:59.828680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.242 [2024-11-22 14:47:59.901156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.500 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.500 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:45.500 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.500 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.500 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 [2024-11-22 14:48:00.027124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.500 Malloc0 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.500 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.501 [2024-11-22 14:48:00.088499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64736 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64736 /var/tmp/bdevperf.sock 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64736 ']' 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.501 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.501 [2024-11-22 14:48:00.153298] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
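Stripped of xtrace noise, the queue_depth setup traced here reduces to a few RPCs on the target followed by wiring bdevperf up through its own RPC socket. The sketch below uses scripts/rpc.py in place of the script's rpc_cmd wrapper (an assumed but equivalent substitution) and omits the readiness waits and traps the real script performs.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a subsystem
# exposing it on 10.0.0.3:4420 (the namespace address set up earlier).
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: start bdevperf idle (-z) on its own RPC socket, attach the remote
# namespace over TCP, then kick off the 10 s verify run at queue depth 1024.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests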
00:09:45.501 [2024-11-22 14:48:00.153437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64736 ] 00:09:45.759 [2024-11-22 14:48:00.305868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.759 [2024-11-22 14:48:00.380128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.019 [2024-11-22 14:48:00.457656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.019 NVMe0n1 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.019 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:46.278 Running I/O for 10 seconds... 00:09:48.152 7168.00 IOPS, 28.00 MiB/s [2024-11-22T14:48:03.762Z] 7547.00 IOPS, 29.48 MiB/s [2024-11-22T14:48:05.139Z] 7751.67 IOPS, 30.28 MiB/s [2024-11-22T14:48:06.075Z] 7960.50 IOPS, 31.10 MiB/s [2024-11-22T14:48:07.010Z] 8209.20 IOPS, 32.07 MiB/s [2024-11-22T14:48:07.945Z] 8368.67 IOPS, 32.69 MiB/s [2024-11-22T14:48:08.880Z] 8432.71 IOPS, 32.94 MiB/s [2024-11-22T14:48:09.813Z] 8458.25 IOPS, 33.04 MiB/s [2024-11-22T14:48:10.748Z] 8544.78 IOPS, 33.38 MiB/s [2024-11-22T14:48:11.007Z] 8614.00 IOPS, 33.65 MiB/s 00:09:56.342 Latency(us) 00:09:56.342 [2024-11-22T14:48:11.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.342 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:56.342 Verification LBA range: start 0x0 length 0x4000 00:09:56.342 NVMe0n1 : 10.08 8645.66 33.77 0.00 0.00 117968.59 22639.71 90082.21 00:09:56.342 [2024-11-22T14:48:11.007Z] =================================================================================================================== 00:09:56.342 [2024-11-22T14:48:11.007Z] Total : 8645.66 33.77 0.00 0.00 117968.59 22639.71 90082.21 00:09:56.342 { 00:09:56.342 "results": [ 00:09:56.342 { 00:09:56.342 "job": "NVMe0n1", 00:09:56.342 "core_mask": "0x1", 00:09:56.342 "workload": "verify", 00:09:56.342 "status": "finished", 00:09:56.342 "verify_range": { 00:09:56.342 "start": 0, 00:09:56.342 "length": 16384 00:09:56.342 }, 00:09:56.342 "queue_depth": 1024, 00:09:56.342 "io_size": 4096, 00:09:56.342 "runtime": 10.081827, 00:09:56.342 "iops": 8645.655197217726, 00:09:56.342 "mibps": 33.77209061413174, 00:09:56.342 "io_failed": 0, 00:09:56.342 "io_timeout": 0, 00:09:56.342 "avg_latency_us": 117968.59420961949, 00:09:56.342 "min_latency_us": 22639.70909090909, 00:09:56.342 "max_latency_us": 90082.2109090909 00:09:56.342 } 
00:09:56.342 ], 00:09:56.342 "core_count": 1 00:09:56.342 } 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64736 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64736 ']' 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64736 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64736 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.342 killing process with pid 64736 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64736' 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64736 00:09:56.342 Received shutdown signal, test time was about 10.000000 seconds 00:09:56.342 00:09:56.342 Latency(us) 00:09:56.342 [2024-11-22T14:48:11.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.342 [2024-11-22T14:48:11.007Z] =================================================================================================================== 00:09:56.342 [2024-11-22T14:48:11.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:56.342 14:48:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64736 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.601 rmmod nvme_tcp 00:09:56.601 rmmod nvme_fabrics 00:09:56.601 rmmod nvme_keyring 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64717 ']' 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64717 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64717 ']' 00:09:56.601 
14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64717 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64717 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:56.601 killing process with pid 64717 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64717' 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64717 00:09:56.601 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64717 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:57.168 14:48:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:57.168 00:09:57.168 real 0m12.925s 00:09:57.168 user 0m21.530s 00:09:57.168 sys 0m2.459s 00:09:57.168 ************************************ 00:09:57.168 END TEST nvmf_queue_depth 00:09:57.168 ************************************ 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.168 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.427 ************************************ 00:09:57.427 START TEST nvmf_target_multipath 00:09:57.427 ************************************ 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:57.427 * Looking for test storage... 
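The nvmf_queue_depth run that ends above drives a single remote namespace from bdevperf at a queue depth of 1024 (verify workload, 4096-byte I/O, roughly 10 s, per the result block). A minimal host-side sketch, reconstructed only from commands visible in the trace (rpc_cmd there is the suite's wrapper around scripts/rpc.py; the bdevperf launch flags themselves are not repeated in this excerpt):

  # attach the target's namespace to the already-running bdevperf instance over its RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the timed run; results come back as the JSON block seen above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests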
00:09:57.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.427 14:48:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.427 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.428 --rc genhtml_branch_coverage=1 00:09:57.428 --rc genhtml_function_coverage=1 00:09:57.428 --rc genhtml_legend=1 00:09:57.428 --rc geninfo_all_blocks=1 00:09:57.428 --rc geninfo_unexecuted_blocks=1 00:09:57.428 00:09:57.428 ' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.428 --rc genhtml_branch_coverage=1 00:09:57.428 --rc genhtml_function_coverage=1 00:09:57.428 --rc genhtml_legend=1 00:09:57.428 --rc geninfo_all_blocks=1 00:09:57.428 --rc geninfo_unexecuted_blocks=1 00:09:57.428 00:09:57.428 ' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.428 --rc genhtml_branch_coverage=1 00:09:57.428 --rc genhtml_function_coverage=1 00:09:57.428 --rc genhtml_legend=1 00:09:57.428 --rc geninfo_all_blocks=1 00:09:57.428 --rc geninfo_unexecuted_blocks=1 00:09:57.428 00:09:57.428 ' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.428 --rc genhtml_branch_coverage=1 00:09:57.428 --rc genhtml_function_coverage=1 00:09:57.428 --rc genhtml_legend=1 00:09:57.428 --rc geninfo_all_blocks=1 00:09:57.428 --rc geninfo_unexecuted_blocks=1 00:09:57.428 00:09:57.428 ' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.428 
14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.428 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.687 14:48:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.687 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.688 Cannot find device "nvmf_init_br" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.688 Cannot find device "nvmf_init_br2" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.688 Cannot find device "nvmf_tgt_br" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.688 Cannot find device "nvmf_tgt_br2" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.688 Cannot find device "nvmf_init_br" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.688 Cannot find device "nvmf_init_br2" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.688 Cannot find device "nvmf_tgt_br" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.688 Cannot find device "nvmf_tgt_br2" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.688 Cannot find device "nvmf_br" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.688 Cannot find device "nvmf_init_if" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.688 Cannot find device "nvmf_init_if2" 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
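The nvmf_veth_init sequence above builds the private multipath topology: one network namespace for the target and two veth pairs per side, so the initiator at 10.0.0.1/10.0.0.2 reaches the target at 10.0.0.3/10.0.0.4. Condensed from the ip commands already traced (the bridge enslaving, iptables ACCEPT rules and ping checks follow just below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2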
00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.688 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:09:57.947 00:09:57.947 --- 10.0.0.3 ping statistics --- 00:09:57.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.947 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.947 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.947 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:09:57.947 00:09:57.947 --- 10.0.0.4 ping statistics --- 00:09:57.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.947 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:57.947 00:09:57.947 --- 10.0.0.1 ping statistics --- 00:09:57.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.947 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:57.947 00:09:57.947 --- 10.0.0.2 ping statistics --- 00:09:57.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.947 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65110 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65110 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65110 ']' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
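nvmfappstart above then launches the target application inside that namespace so it only sees the veth addresses; pid 65110 in this run. A condensed view of what the wrapper executes, per the traced command (backgrounding and pid capture are sketched here; in the suite the pid is recorded and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers):

  # 4 reactor cores (-m 0xF), all tracepoint groups enabled (-e 0xFFFF), run inside the target netns
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!   # 65110 here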
00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.947 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:57.947 [2024-11-22 14:48:12.541070] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:09:57.947 [2024-11-22 14:48:12.541150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.206 [2024-11-22 14:48:12.696736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.206 [2024-11-22 14:48:12.778355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.206 [2024-11-22 14:48:12.778776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.206 [2024-11-22 14:48:12.778956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.206 [2024-11-22 14:48:12.779097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.206 [2024-11-22 14:48:12.779146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.206 [2024-11-22 14:48:12.780776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.206 [2024-11-22 14:48:12.780916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.206 [2024-11-22 14:48:12.781533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.206 [2024-11-22 14:48:12.781543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.206 [2024-11-22 14:48:12.860072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.465 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.465 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:58.465 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.466 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.466 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:58.466 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.466 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.736 [2024-11-22 14:48:13.286692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.736 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:59.005 Malloc0 00:09:59.005 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:59.264 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.523 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:59.782 [2024-11-22 14:48:14.359921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:59.782 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:00.040 [2024-11-22 14:48:14.664177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:00.040 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:00.299 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:02.830 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65192 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:02.831 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:02.831 [global] 00:10:02.831 thread=1 00:10:02.831 invalidate=1 00:10:02.831 rw=randrw 00:10:02.831 time_based=1 00:10:02.831 runtime=6 00:10:02.831 ioengine=libaio 00:10:02.831 direct=1 00:10:02.831 bs=4096 00:10:02.831 iodepth=128 00:10:02.831 norandommap=0 00:10:02.831 numjobs=1 00:10:02.831 00:10:02.831 verify_dump=1 00:10:02.831 verify_backlog=512 00:10:02.831 verify_state_save=0 00:10:02.831 do_verify=1 00:10:02.831 verify=crc32c-intel 00:10:02.831 [job0] 00:10:02.831 filename=/dev/nvme0n1 00:10:02.831 Could not set queue depth (nvme0n1) 00:10:02.831 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.831 fio-3.35 00:10:02.831 Starting 1 thread 00:10:03.398 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:03.657 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:03.915 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:04.174 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:04.433 14:48:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65192 00:10:09.699 00:10:09.699 job0: (groupid=0, jobs=1): err= 0: pid=65213: Fri Nov 22 14:48:23 2024 00:10:09.699 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(235MiB/6006msec) 00:10:09.699 slat (usec): min=4, max=6213, avg=59.56, stdev=231.91 00:10:09.699 clat (usec): min=1596, max=17425, avg=8739.42, stdev=1596.49 00:10:09.699 lat (usec): min=1608, max=17607, avg=8798.99, stdev=1602.80 00:10:09.699 clat percentiles (usec): 00:10:09.699 | 1.00th=[ 4490], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7767], 00:10:09.699 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:10:09.699 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[11994], 00:10:09.699 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15533], 99.95th=[15795], 00:10:09.699 | 99.99th=[16909] 00:10:09.699 bw ( KiB/s): min= 5032, max=26728, per=51.20%, avg=20515.64, stdev=6959.62, samples=11 00:10:09.699 iops : min= 1258, max= 6682, avg=5128.91, stdev=1739.90, samples=11 00:10:09.699 write: IOPS=5908, BW=23.1MiB/s (24.2MB/s)(121MiB/5248msec); 0 zone resets 00:10:09.699 slat (usec): min=15, max=3111, avg=67.70, stdev=165.02 00:10:09.699 clat (usec): min=2586, max=17417, avg=7613.25, stdev=1437.26 00:10:09.699 lat (usec): min=2611, max=17439, avg=7680.96, stdev=1442.25 00:10:09.699 clat percentiles (usec): 00:10:09.699 | 1.00th=[ 3359], 5.00th=[ 4555], 10.00th=[ 6063], 20.00th=[ 6915], 00:10:09.699 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:10:09.699 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9503], 00:10:09.699 | 99.00th=[11469], 99.50th=[12387], 99.90th=[14091], 99.95th=[15401], 00:10:09.699 | 99.99th=[16909] 00:10:09.699 bw ( KiB/s): min= 5336, max=26344, per=87.16%, avg=20600.73, stdev=6783.56, samples=11 00:10:09.699 iops : min= 1334, max= 6586, avg=5150.18, stdev=1695.89, samples=11 00:10:09.700 lat (msec) : 2=0.01%, 4=1.31%, 10=89.10%, 20=9.59% 00:10:09.700 cpu : usr=5.21%, sys=21.83%, ctx=5288, majf=0, minf=90 00:10:09.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:09.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.700 issued rwts: total=60158,31008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.700 00:10:09.700 Run status group 0 (all jobs): 00:10:09.700 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=235MiB (246MB), run=6006-6006msec 00:10:09.700 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=121MiB (127MB), run=5248-5248msec 00:10:09.700 00:10:09.700 Disk stats (read/write): 00:10:09.700 nvme0n1: ios=59407/30208, merge=0/0, ticks=497991/216128, in_queue=714119, util=98.70% 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65300 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:09.700 14:48:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:09.700 [global] 00:10:09.700 thread=1 00:10:09.700 invalidate=1 00:10:09.700 rw=randrw 00:10:09.700 time_based=1 00:10:09.700 runtime=6 00:10:09.700 ioengine=libaio 00:10:09.700 direct=1 00:10:09.700 bs=4096 00:10:09.700 iodepth=128 00:10:09.700 norandommap=0 00:10:09.700 numjobs=1 00:10:09.700 00:10:09.700 verify_dump=1 00:10:09.700 verify_backlog=512 00:10:09.700 verify_state_save=0 00:10:09.700 do_verify=1 00:10:09.700 verify=crc32c-intel 00:10:09.700 [job0] 00:10:09.700 filename=/dev/nvme0n1 00:10:09.700 Could not set queue depth (nvme0n1) 00:10:09.700 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:09.700 fio-3.35 00:10:09.700 Starting 1 thread 00:10:10.645 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:10.645 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:10.904 
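With the second fio job (round-robin policy, pid 65300) running, the test flips the ANA state of the two listeners so the host has to move I/O between paths; the changes are plain RPCs against the target, and the host view is polled from sysfs by check_ana_state. Condensed from the commands traced here, with cat standing in for the file comparison the helper performs:

  # take path 10.0.0.3 out of service, leave 10.0.0.4 reachable but not preferred
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
  # host side: the kernel exposes the per-path state the test then waits for
  cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized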
14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:10.904 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:11.471 14:48:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:11.729 14:48:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65300 00:10:15.915 00:10:15.915 job0: (groupid=0, jobs=1): err= 0: pid=65321: Fri Nov 22 14:48:30 2024 00:10:15.915 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(260MiB/6006msec) 00:10:15.915 slat (usec): min=5, max=5802, avg=44.81, stdev=193.06 00:10:15.915 clat (usec): min=271, max=23728, avg=7879.73, stdev=2544.99 00:10:15.915 lat (usec): min=289, max=23738, avg=7924.54, stdev=2549.80 00:10:15.915 clat percentiles (usec): 00:10:15.915 | 1.00th=[ 1188], 5.00th=[ 3490], 10.00th=[ 4883], 20.00th=[ 6718], 00:10:15.915 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8160], 00:10:15.915 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10683], 95.00th=[12256], 00:10:15.915 | 99.00th=[16450], 99.50th=[18220], 99.90th=[20841], 99.95th=[21627], 00:10:15.915 | 99.99th=[23725] 00:10:15.915 bw ( KiB/s): min=10768, max=27840, per=52.79%, avg=23440.73, stdev=4840.41, samples=11 00:10:15.915 iops : min= 2692, max= 6960, avg=5860.18, stdev=1210.10, samples=11 00:10:15.915 write: IOPS=6529, BW=25.5MiB/s (26.7MB/s)(138MiB/5425msec); 0 zone resets 00:10:15.915 slat (usec): min=11, max=2054, avg=55.13, stdev=133.95 00:10:15.915 clat (usec): min=280, max=22874, avg=6696.29, stdev=2036.51 00:10:15.915 lat (usec): min=312, max=22896, avg=6751.42, stdev=2042.62 00:10:15.915 clat percentiles (usec): 00:10:15.915 | 1.00th=[ 1549], 5.00th=[ 2999], 10.00th=[ 3818], 20.00th=[ 5080], 00:10:15.915 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:10:15.915 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8291], 95.00th=[ 9503], 00:10:15.915 | 99.00th=[12911], 99.50th=[13566], 99.90th=[17433], 99.95th=[18220], 00:10:15.915 | 99.99th=[19792] 00:10:15.915 bw ( KiB/s): min=11032, max=28672, per=89.77%, avg=23445.82, stdev=4768.48, samples=11 00:10:15.915 iops : min= 2758, max= 7168, avg=5861.45, stdev=1192.12, samples=11 00:10:15.915 lat (usec) : 500=0.08%, 750=0.21%, 1000=0.30% 00:10:15.915 lat (msec) : 2=1.44%, 4=6.39%, 10=82.55%, 20=8.89%, 50=0.14% 00:10:15.915 cpu : usr=5.60%, sys=22.85%, ctx=6040, majf=0, minf=72 00:10:15.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:15.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.915 issued rwts: total=66670,35422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.915 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:15.915 00:10:15.915 Run status group 0 (all jobs): 00:10:15.915 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=260MiB (273MB), run=6006-6006msec 00:10:15.915 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=138MiB (145MB), run=5425-5425msec 00:10:15.915 00:10:15.915 Disk stats (read/write): 00:10:15.915 nvme0n1: ios=65744/34744, merge=0/0, ticks=496312/218342, in_queue=714654, util=98.67% 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:15.915 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.174 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.433 rmmod nvme_tcp 00:10:16.433 rmmod nvme_fabrics 00:10:16.433 rmmod nvme_keyring 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 65110 ']' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65110 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65110 ']' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65110 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65110 00:10:16.433 killing process with pid 65110 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65110' 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65110 00:10:16.433 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65110 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:16.692 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:16.692 
14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:16.951 00:10:16.951 real 0m19.611s 00:10:16.951 user 1m13.142s 00:10:16.951 sys 0m9.197s 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.951 ************************************ 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.951 END TEST nvmf_target_multipath 00:10:16.951 ************************************ 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.951 ************************************ 00:10:16.951 START TEST nvmf_zcopy 00:10:16.951 ************************************ 00:10:16.951 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:16.951 * Looking for test storage... 
00:10:17.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.211 --rc genhtml_branch_coverage=1 00:10:17.211 --rc genhtml_function_coverage=1 00:10:17.211 --rc genhtml_legend=1 00:10:17.211 --rc geninfo_all_blocks=1 00:10:17.211 --rc geninfo_unexecuted_blocks=1 00:10:17.211 00:10:17.211 ' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.211 --rc genhtml_branch_coverage=1 00:10:17.211 --rc genhtml_function_coverage=1 00:10:17.211 --rc genhtml_legend=1 00:10:17.211 --rc geninfo_all_blocks=1 00:10:17.211 --rc geninfo_unexecuted_blocks=1 00:10:17.211 00:10:17.211 ' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.211 --rc genhtml_branch_coverage=1 00:10:17.211 --rc genhtml_function_coverage=1 00:10:17.211 --rc genhtml_legend=1 00:10:17.211 --rc geninfo_all_blocks=1 00:10:17.211 --rc geninfo_unexecuted_blocks=1 00:10:17.211 00:10:17.211 ' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.211 --rc genhtml_branch_coverage=1 00:10:17.211 --rc genhtml_function_coverage=1 00:10:17.211 --rc genhtml_legend=1 00:10:17.211 --rc geninfo_all_blocks=1 00:10:17.211 --rc geninfo_unexecuted_blocks=1 00:10:17.211 00:10:17.211 ' 00:10:17.211 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
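The version handling traced above (cmp_versions in scripts/common.sh) only decides whether the installed lcov predates 2.x so the matching --rc coverage flags get exported. The same field-by-field comparison, reduced to a small standalone sketch (version_lt is a hypothetical name, not the SPDK helper):
version_lt() {            # true when $1 sorts before $2, comparing dot-separated numeric fields
    local IFS='.'
    local -a a=($1) b=($2)
    local i x y
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1              # equal versions are not "less than"
}
lcov_ver=$(lcov --version | awk '{print $NF}')    # e.g. 1.15, the value compared against 2 in the trace above
version_lt "$lcov_ver" 2 && echo "pre-2.x lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"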
00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
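At this point zcopy.sh has sourced test/nvmf/common.sh and entered nvmftestinit, which registers the nvmftestfini cleanup trap before any interfaces exist. Condensed, these target tests follow roughly the skeleton below (illustrative only; zcopy.sh and common.sh remain the authoritative versions):
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides nvmftestinit, nvmfappstart, rpc_cmd, nvmftestfini
nvmftestinit             # builds the veth/netns fabric; also runs: trap nvmftestfini SIGINT SIGTERM EXIT
nvmfappstart -m 0x2      # starts nvmf_tgt inside nvmf_tgt_ns_spdk and waits for its RPC socket
# ... transport/subsystem RPCs and bdevperf runs (traced further down) ...
nvmftestfini             # explicit teardown at the end; the trap covers early exits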
00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.212 Cannot find device "nvmf_init_br" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:17.212 14:48:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.212 Cannot find device "nvmf_init_br2" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.212 Cannot find device "nvmf_tgt_br" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.212 Cannot find device "nvmf_tgt_br2" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.212 Cannot find device "nvmf_init_br" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.212 Cannot find device "nvmf_init_br2" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:17.212 Cannot find device "nvmf_tgt_br" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:17.212 Cannot find device "nvmf_tgt_br2" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:17.212 Cannot find device "nvmf_br" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:17.212 Cannot find device "nvmf_init_if" 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:17.212 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:17.471 Cannot find device "nvmf_init_if2" 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.471 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:17.471 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:17.471 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:17.471 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.472 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:17.731 14:48:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:17.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:17.731 00:10:17.731 --- 10.0.0.3 ping statistics --- 00:10:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.731 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:17.731 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:17.731 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:10:17.731 00:10:17.731 --- 10.0.0.4 ping statistics --- 00:10:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.731 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:17.731 00:10:17.731 --- 10.0.0.1 ping statistics --- 00:10:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.731 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:17.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:10:17.731 00:10:17.731 --- 10.0.0.2 ping statistics --- 00:10:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.731 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65622 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65622 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65622 ']' 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.731 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.731 [2024-11-22 14:48:32.260303] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
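The ip/iptables commands above build and verify the virtual fabric the target listens on: a dedicated netns for nvmf_tgt, veth pairs joined by the nvmf_br bridge, addresses 10.0.0.1-10.0.0.4, ACCEPT rules for port 4420, and one ping per address. One initiator/target pair of that topology, reduced to a standalone sketch (the harness creates two of each):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3        # initiator-side reachability check, as in the ping statistics above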
00:10:17.731 [2024-11-22 14:48:32.260410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.991 [2024-11-22 14:48:32.407769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.991 [2024-11-22 14:48:32.457748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.991 [2024-11-22 14:48:32.457815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.991 [2024-11-22 14:48:32.457825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.991 [2024-11-22 14:48:32.457832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.991 [2024-11-22 14:48:32.457838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.991 [2024-11-22 14:48:32.458228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.991 [2024-11-22 14:48:32.530628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.991 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.252 [2024-11-22 14:48:32.658472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.252 [2024-11-22 14:48:32.675308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.252 malloc0 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.252 { 00:10:18.252 "params": { 00:10:18.252 "name": "Nvme$subsystem", 00:10:18.252 "trtype": "$TEST_TRANSPORT", 00:10:18.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.252 "adrfam": "ipv4", 00:10:18.252 "trsvcid": "$NVMF_PORT", 00:10:18.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.252 "hdgst": ${hdgst:-false}, 00:10:18.252 "ddgst": ${ddgst:-false} 00:10:18.252 }, 00:10:18.252 "method": "bdev_nvme_attach_controller" 00:10:18.252 } 00:10:18.252 EOF 00:10:18.252 )") 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
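The rpc_cmd calls traced above are the whole zero-copy target bring-up. Written out as plain rpc.py invocations against the nvmf_tgt started earlier (rpc_cmd forwards its arguments to this script; values copied from the log):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB malloc bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose it as namespace 1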
00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.252 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.252 "params": { 00:10:18.252 "name": "Nvme1", 00:10:18.252 "trtype": "tcp", 00:10:18.252 "traddr": "10.0.0.3", 00:10:18.252 "adrfam": "ipv4", 00:10:18.252 "trsvcid": "4420", 00:10:18.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.252 "hdgst": false, 00:10:18.252 "ddgst": false 00:10:18.252 }, 00:10:18.252 "method": "bdev_nvme_attach_controller" 00:10:18.252 }' 00:10:18.252 [2024-11-22 14:48:32.783601] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:10:18.252 [2024-11-22 14:48:32.783707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65646 ] 00:10:18.511 [2024-11-22 14:48:32.939037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.511 [2024-11-22 14:48:33.005489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.511 [2024-11-22 14:48:33.089876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.770 Running I/O for 10 seconds... 00:10:20.643 6108.00 IOPS, 47.72 MiB/s [2024-11-22T14:48:36.244Z] 5928.50 IOPS, 46.32 MiB/s [2024-11-22T14:48:37.238Z] 5909.33 IOPS, 46.17 MiB/s [2024-11-22T14:48:38.614Z] 5914.25 IOPS, 46.21 MiB/s [2024-11-22T14:48:39.548Z] 5911.00 IOPS, 46.18 MiB/s [2024-11-22T14:48:40.482Z] 5907.00 IOPS, 46.15 MiB/s [2024-11-22T14:48:41.418Z] 5906.29 IOPS, 46.14 MiB/s [2024-11-22T14:48:42.354Z] 5902.12 IOPS, 46.11 MiB/s [2024-11-22T14:48:43.290Z] 5904.78 IOPS, 46.13 MiB/s [2024-11-22T14:48:43.290Z] 5902.60 IOPS, 46.11 MiB/s 00:10:28.625 Latency(us) 00:10:28.625 [2024-11-22T14:48:43.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.625 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.625 Verification LBA range: start 0x0 length 0x1000 00:10:28.625 Nvme1n1 : 10.02 5906.19 46.14 0.00 0.00 21602.73 2398.02 32648.84 00:10:28.625 [2024-11-22T14:48:43.290Z] =================================================================================================================== 00:10:28.625 [2024-11-22T14:48:43.290Z] Total : 5906.19 46.14 0.00 0.00 21602.73 2398.02 32648.84 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65765 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:28.884 { 00:10:28.884 "params": { 00:10:28.884 "name": "Nvme$subsystem", 00:10:28.884 "trtype": "$TEST_TRANSPORT", 00:10:28.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.884 "adrfam": "ipv4", 00:10:28.884 "trsvcid": "$NVMF_PORT", 00:10:28.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.884 "hdgst": ${hdgst:-false}, 00:10:28.884 "ddgst": ${ddgst:-false} 00:10:28.884 }, 00:10:28.884 "method": "bdev_nvme_attach_controller" 00:10:28.884 } 00:10:28.884 EOF 00:10:28.884 )") 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:28.884 [2024-11-22 14:48:43.515514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.884 [2024-11-22 14:48:43.515579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:28.884 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.884 "params": { 00:10:28.884 "name": "Nvme1", 00:10:28.884 "trtype": "tcp", 00:10:28.884 "traddr": "10.0.0.3", 00:10:28.884 "adrfam": "ipv4", 00:10:28.884 "trsvcid": "4420", 00:10:28.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.884 "hdgst": false, 00:10:28.884 "ddgst": false 00:10:28.884 }, 00:10:28.884 "method": "bdev_nvme_attach_controller" 00:10:28.884 }' 00:10:28.884 [2024-11-22 14:48:43.523458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.884 [2024-11-22 14:48:43.523486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.884 [2024-11-22 14:48:43.531449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.884 [2024-11-22 14:48:43.531493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.884 [2024-11-22 14:48:43.539449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.884 [2024-11-22 14:48:43.539478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.143 [2024-11-22 14:48:43.547451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.143 [2024-11-22 14:48:43.547482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.143 [2024-11-22 14:48:43.559456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.559487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.567457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.567486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.569490] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
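The JSON fragment printed above is what bdevperf receives over /dev/fd/63: a bdev subsystem config that attaches an NVMe-oF controller for nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420. A file-based equivalent is sketched below; the params block is copied from the log, while the surrounding "subsystems"/"bdev"/"config" wrapper is the usual shape of such configs and is assumed here rather than shown verbatim in the trace:
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvmf.json \
    -t 5 -q 128 -w randrw -M 50 -o 8192     # 5 s, QD 128, 50/50 random read/write, 8 KiB I/O, as in this second run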
00:10:29.144 [2024-11-22 14:48:43.569581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65765 ] 00:10:29.144 [2024-11-22 14:48:43.575463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.575487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.583461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.583487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.591480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.591510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.603514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.603543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.611512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.611542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.619512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.619542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.627528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.627558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.635528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.635576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.643516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.643561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.651528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.651564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.659528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.659573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.667535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.667584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.675539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.675586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.683538] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.683583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.691545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.691579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.699552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.699630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.707541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.707588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.715541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.715587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.717729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.144 [2024-11-22 14:48:43.723564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.723595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.731578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.731627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.739568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.739600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.747574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.747624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.755566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.755612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.767567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.767613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.775569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.775616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.783571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.783616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.791573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.791619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.144 [2024-11-22 14:48:43.796242] reactor.c:1005:reactor_run: *NOTICE*: Reactor 
started on core 0 00:10:29.144 [2024-11-22 14:48:43.799585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.144 [2024-11-22 14:48:43.799616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.811605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.811640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.819598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.819645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.831642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.831704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.839617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.839667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.847616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.847666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.859641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.859684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.867622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.867671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.875616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.875663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.878348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.404 [2024-11-22 14:48:43.887659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.887726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.895628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.895677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.907664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.907721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.915622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.915667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.923623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.923670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:29.404 [2024-11-22 14:48:43.931622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.931667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.939650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.939703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.947644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.947694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.955647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.955697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.963664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.963740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.971665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.971714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.979671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.979737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.987671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.987728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:43.995685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:43.995739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:44.003688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:44.003737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 Running I/O for 5 seconds... 
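The paired error lines that repeat through the rest of this run line up with namespace-add RPCs issued while the 5-second I/O job is in flight: each request asks for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already in use, so spdk_nvmf_subsystem_add_ns_ext rejects it and nvmf_rpc_ns_paused reports the failed RPC. A hypothetical way to provoke the same pair by hand is sketched below; the bdev name Malloc0 and the --nsid flag spelling follow common scripts/rpc.py usage and are assumptions rather than details taken from this log.

    # Hypothetical sketch (not lifted from this log): requesting an NSID the
    # subsystem already exposes triggers "Requested NSID 1 already in use"
    # followed by "Unable to add namespace". Bdev name and flag spelling are
    # assumed from typical scripts/rpc.py usage.
    scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 Malloc0
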
00:10:29.404 [2024-11-22 14:48:44.011711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:44.011779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:44.025850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.404 [2024-11-22 14:48:44.025901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.404 [2024-11-22 14:48:44.035368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.405 [2024-11-22 14:48:44.035475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.405 [2024-11-22 14:48:44.049641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.405 [2024-11-22 14:48:44.049721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.405 [2024-11-22 14:48:44.058630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.405 [2024-11-22 14:48:44.058686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.073881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.073934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.089886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.089934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.105966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.106019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.124310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.124394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.138893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.138954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.150097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.150148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.158948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.158998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.170585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.170636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.180335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.180411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.191230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 
[2024-11-22 14:48:44.191297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.207158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.207212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.224174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.224226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.233539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.233575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.244339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.244418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.255257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.255304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.263353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.263428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.275928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.275992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.292200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.292259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.310107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.310155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.664 [2024-11-22 14:48:44.320199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.664 [2024-11-22 14:48:44.320252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.330353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.330430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.339705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.339781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.357156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.357242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.373893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.373943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.385170] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.385220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.393633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.393684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.407653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.407705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.416551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.416603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.426894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.426946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.436757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.436810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.446575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.446626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.456730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.456765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.466638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.466691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.476473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.476525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.486248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.486299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.495861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.495911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.505979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.506042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.520592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.520646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.529925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.529971] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.923 [2024-11-22 14:48:44.542304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.923 [2024-11-22 14:48:44.542356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.924 [2024-11-22 14:48:44.551415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.924 [2024-11-22 14:48:44.551495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.924 [2024-11-22 14:48:44.563483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.924 [2024-11-22 14:48:44.563535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.924 [2024-11-22 14:48:44.572557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.924 [2024-11-22 14:48:44.572608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.924 [2024-11-22 14:48:44.584793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.924 [2024-11-22 14:48:44.584844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.596036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.596087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.612639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.612691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.622367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.622461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.637239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.637308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.654333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.654410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.664933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.664965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.680136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.680188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.695149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.695200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.704970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.705021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.716839] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.716889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.727502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.727534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.739329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.739404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.748622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.748675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.763902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.763954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.773355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.773429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.789806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.789865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.807568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.807604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.817640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.817695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.183 [2024-11-22 14:48:44.831541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.183 [2024-11-22 14:48:44.831595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.846455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.846517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.864266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.864316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.880127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.880178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.891245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.891297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.907121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.907176] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.922520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.922572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.933378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.933456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.941293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.941345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.953658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.953712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.970614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.970663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:44.988132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:44.988183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.002261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.002330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.010910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.010960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 12508.00 IOPS, 97.72 MiB/s [2024-11-22T14:48:45.107Z] [2024-11-22 14:48:45.023242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.023293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.039087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.039136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.056420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.056483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.065983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.066036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.075875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.075927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 14:48:45.085319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.085397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.442 [2024-11-22 
14:48:45.099065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.442 [2024-11-22 14:48:45.099114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.701 [2024-11-22 14:48:45.107266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.701 [2024-11-22 14:48:45.107317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.701 [2024-11-22 14:48:45.118983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.701 [2024-11-22 14:48:45.119034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.701 [2024-11-22 14:48:45.130489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.130550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.138887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.138945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.153758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.153817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.162668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.162719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.174555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.174606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.188427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.188477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.205555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.205605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.215971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.216023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.231207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.231277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.246817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.246862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.256740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.256790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.267890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.267940] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.281842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.281891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.290211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.290260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.304899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.304948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.313763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.313811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.330790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.330841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.348065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.348124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.702 [2024-11-22 14:48:45.363151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.702 [2024-11-22 14:48:45.363203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.372467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.372516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.384143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.384193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.399697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.399749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.411152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.411201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.427404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.427491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.444366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.444451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.455004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.455052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.471158] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.471206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.487799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.487853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.505504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.505553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.514499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.514545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.525085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.525133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.533377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.533457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.544863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.544913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.553593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.553642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.564964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.565016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.573644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.573695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.584101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.584149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.592433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.592481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.604026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.604074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.961 [2024-11-22 14:48:45.614373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.961 [2024-11-22 14:48:45.614459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.630581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.630627] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.641070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.641117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.656870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.656920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.668117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.668166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.676348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.676426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.688010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.688060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.699364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.699477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.707536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.707583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.723087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.723136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.733327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.733403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.744167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.744204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.754656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.754689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.765525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.765576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.778260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.220 [2024-11-22 14:48:45.778308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.220 [2024-11-22 14:48:45.787468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.787516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.802220] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.802267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.812025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.812075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.825395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.825460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.840430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.840489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.856747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.856796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.865708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.865774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.221 [2024-11-22 14:48:45.875927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.221 [2024-11-22 14:48:45.875976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.890324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.890401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.901522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.901571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.917422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.917468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.933665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.933714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.944968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.945016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.959917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.959987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.968201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.968248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.480 [2024-11-22 14:48:45.977356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.480 [2024-11-22 14:48:45.977429] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.480 [2024-11-22 14:48:45.988172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.480 [2024-11-22 14:48:45.988221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.480 [... the same two-line error pair repeats with increasing timestamps from 14:48:45.996157 through 14:48:49.302916; duplicate entries elided ...]
00:10:31.480 12769.00 IOPS, 99.76 MiB/s [2024-11-22T14:48:46.145Z]
00:10:32.518 12800.33 IOPS, 100.00 MiB/s [2024-11-22T14:48:47.183Z]
00:10:33.556 12662.50 IOPS, 98.93 MiB/s [2024-11-22T14:48:48.221Z]
00:10:34.591 12405.00 IOPS, 96.91 MiB/s [2024-11-22T14:48:49.256Z]
00:10:34.591                                                                                                 Latency(us)
00:10:34.591 [2024-11-22T14:48:49.256Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:34.591 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:34.591 	 Nvme1n1             :       5.01   12409.72      96.95       0.00       0.00   10301.69    3961.95   21448.15
00:10:34.591 [2024-11-22T14:48:49.256Z] ===================================================================================================================
00:10:34.591 [2024-11-22T14:48:49.256Z] Total                       :            12409.72      96.95       0.00       0.00   10301.69    3961.95   21448.15
00:10:34.851 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65765) - No such process
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65765
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:34.851 delay0
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:34.851 14:48:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:35.108 [2024-11-22 14:48:49.527136] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:41.665 Initializing NVMe Controllers 00:10:41.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.665 Initialization complete. Launching workers. 00:10:41.665 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 78 00:10:41.665 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 365, failed to submit 33 00:10:41.665 success 223, unsuccessful 142, failed 0 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.665 rmmod nvme_tcp 00:10:41.665 rmmod nvme_fabrics 00:10:41.665 rmmod nvme_keyring 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65622 ']' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65622 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65622 ']' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65622 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65622 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:41.665 killing process with pid 65622 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65622' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65622 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65622 00:10:41.665 
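(For reference: the zcopy stage above removes namespace 1, re-adds it backed by a delay bdev layered on malloc0, and then drives it with SPDK's bundled abort example so that queued I/O can be aborted. A rough manual equivalent is sketched below; it is not part of the captured log and assumes a running nvmf_tgt with subsystem nqn.2016-06.io.spdk:cnode1 already listening on 10.0.0.3:4420 and the default rpc.py socket, neither of which these commands set up themselves. The latency arguments to bdev_delay_create are in microseconds, matching the values used in the run above.)

    # sketch, not from the captured log: reproduce the delay-bdev abort stage by hand
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'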
14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:41.665 14:48:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:41.665 00:10:41.665 real 0m24.710s 00:10:41.665 user 0m40.293s 00:10:41.665 sys 0m6.924s 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.665 ************************************ 00:10:41.665 END TEST nvmf_zcopy 00:10:41.665 ************************************ 00:10:41.665 
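(The tail of the zcopy run above is the shared nvmftestfini teardown: host-side NVMe modules are unloaded, the nvmf_tgt process (pid 65622 in this run) is killed and reaped, SPDK-tagged iptables rules are dropped, and the veth/bridge/namespace topology is dismantled. Condensed from the logged commands, with the pid generalized and only one of each interface pair shown, it amounts to roughly the following:)

    # sketch distilled from the log entries above; interface and namespace names come from the test harness
    modprobe -v -r nvme-tcp                               # the verbose output shows nvme_fabrics and nvme_keyring removed with it
    kill "$nvmfpid" && wait "$nvmfpid"                    # pid 65622 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except the SPDK-tagged rules
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if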
14:48:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.665 ************************************ 00:10:41.665 START TEST nvmf_nmic 00:10:41.665 ************************************ 00:10:41.665 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.925 * Looking for test storage... 00:10:41.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.925 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.925 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.925 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.925 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.926 --rc genhtml_branch_coverage=1 00:10:41.926 --rc genhtml_function_coverage=1 00:10:41.926 --rc genhtml_legend=1 00:10:41.926 --rc geninfo_all_blocks=1 00:10:41.926 --rc geninfo_unexecuted_blocks=1 00:10:41.926 00:10:41.926 ' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.926 --rc genhtml_branch_coverage=1 00:10:41.926 --rc genhtml_function_coverage=1 00:10:41.926 --rc genhtml_legend=1 00:10:41.926 --rc geninfo_all_blocks=1 00:10:41.926 --rc geninfo_unexecuted_blocks=1 00:10:41.926 00:10:41.926 ' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.926 --rc genhtml_branch_coverage=1 00:10:41.926 --rc genhtml_function_coverage=1 00:10:41.926 --rc genhtml_legend=1 00:10:41.926 --rc geninfo_all_blocks=1 00:10:41.926 --rc geninfo_unexecuted_blocks=1 00:10:41.926 00:10:41.926 ' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.926 --rc genhtml_branch_coverage=1 00:10:41.926 --rc genhtml_function_coverage=1 00:10:41.926 --rc genhtml_legend=1 00:10:41.926 --rc geninfo_all_blocks=1 00:10:41.926 --rc geninfo_unexecuted_blocks=1 00:10:41.926 00:10:41.926 ' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.926 14:48:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.926 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:41.926 14:48:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.926 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.927 Cannot 
find device "nvmf_init_br" 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.927 Cannot find device "nvmf_init_br2" 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.927 Cannot find device "nvmf_tgt_br" 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.927 Cannot find device "nvmf_tgt_br2" 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:41.927 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.927 Cannot find device "nvmf_init_br" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.186 Cannot find device "nvmf_init_br2" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.186 Cannot find device "nvmf_tgt_br" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.186 Cannot find device "nvmf_tgt_br2" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.186 Cannot find device "nvmf_br" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.186 Cannot find device "nvmf_init_if" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.186 Cannot find device "nvmf_init_if2" 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.186 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:42.445 00:10:42.445 --- 10.0.0.3 ping statistics --- 00:10:42.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.445 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.445 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:42.445 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:10:42.445 00:10:42.445 --- 10.0.0.4 ping statistics --- 00:10:42.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.445 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:42.445 00:10:42.445 --- 10.0.0.1 ping statistics --- 00:10:42.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.445 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:42.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:42.445 00:10:42.445 --- 10.0.0.2 ping statistics --- 00:10:42.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.445 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66152 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66152 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66152 ']' 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.445 14:48:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.445 [2024-11-22 14:48:56.972159] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:10:42.445 [2024-11-22 14:48:56.972266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.704 [2024-11-22 14:48:57.128904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.704 [2024-11-22 14:48:57.211700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.704 [2024-11-22 14:48:57.211802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.704 [2024-11-22 14:48:57.211817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.704 [2024-11-22 14:48:57.211828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.704 [2024-11-22 14:48:57.211837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.704 [2024-11-22 14:48:57.213414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.704 [2024-11-22 14:48:57.213516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.704 [2024-11-22 14:48:57.213639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.704 [2024-11-22 14:48:57.213649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.704 [2024-11-22 14:48:57.292517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.638 14:48:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.638 14:48:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:43.638 14:48:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.638 14:48:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.638 14:48:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 [2024-11-22 14:48:58.036353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 Malloc0 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:43.638 14:48:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 [2024-11-22 14:48:58.113240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 test case1: single bdev can't be used in multiple subsystems 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.638 [2024-11-22 14:48:58.137051] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:43.638 [2024-11-22 14:48:58.137088] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:43.638 [2024-11-22 14:48:58.137100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.638 request: 00:10:43.638 { 00:10:43.638 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:43.638 "namespace": { 00:10:43.638 "bdev_name": "Malloc0", 00:10:43.638 "no_auto_visible": false 00:10:43.638 }, 00:10:43.638 "method": "nvmf_subsystem_add_ns", 00:10:43.638 "req_id": 1 00:10:43.638 } 00:10:43.638 Got JSON-RPC error response 00:10:43.638 response: 00:10:43.638 { 00:10:43.638 "code": -32602, 00:10:43.638 "message": "Invalid parameters" 00:10:43.638 } 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:43.638 Adding namespace failed - expected result. 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:43.638 test case2: host connect to nvmf target in multiple paths 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.638 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.639 [2024-11-22 14:48:58.150902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:43.639 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.639 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:43.639 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:43.898 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.898 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:43.898 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.898 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:43.898 14:48:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.801 14:49:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:45.801 14:49:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:46.060 [global] 00:10:46.060 thread=1 00:10:46.060 invalidate=1 00:10:46.060 rw=write 00:10:46.060 time_based=1 00:10:46.060 runtime=1 00:10:46.060 ioengine=libaio 00:10:46.060 direct=1 00:10:46.060 bs=4096 00:10:46.060 iodepth=1 00:10:46.060 norandommap=0 00:10:46.060 numjobs=1 00:10:46.060 00:10:46.060 verify_dump=1 00:10:46.060 verify_backlog=512 00:10:46.060 verify_state_save=0 00:10:46.060 do_verify=1 00:10:46.060 verify=crc32c-intel 00:10:46.060 [job0] 00:10:46.060 filename=/dev/nvme0n1 00:10:46.060 Could not set queue depth (nvme0n1) 00:10:46.060 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.060 fio-3.35 00:10:46.060 Starting 1 thread 00:10:47.465 00:10:47.465 job0: (groupid=0, jobs=1): err= 0: pid=66240: Fri Nov 22 14:49:01 2024 00:10:47.465 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:47.465 slat (nsec): min=11884, max=73253, avg=15547.32, stdev=6165.97 00:10:47.465 clat (usec): min=131, max=779, avg=175.24, stdev=23.45 00:10:47.465 lat (usec): min=144, max=792, avg=190.79, stdev=24.32 00:10:47.465 clat percentiles (usec): 00:10:47.465 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:10:47.465 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:47.465 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 215], 00:10:47.465 | 99.00th=[ 237], 99.50th=[ 245], 99.90th=[ 269], 99.95th=[ 297], 00:10:47.465 | 99.99th=[ 783] 00:10:47.465 write: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1001msec); 0 zone resets 00:10:47.465 slat (usec): min=14, max=108, avg=23.05, stdev= 7.78 00:10:47.465 clat (usec): min=67, max=336, avg=104.69, stdev=16.23 00:10:47.465 lat (usec): min=98, max=444, avg=127.74, stdev=18.92 00:10:47.465 clat percentiles (usec): 00:10:47.465 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 92], 00:10:47.465 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:10:47.465 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 126], 95.00th=[ 135], 00:10:47.465 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 196], 99.95th=[ 221], 00:10:47.465 | 99.99th=[ 338] 00:10:47.465 bw ( KiB/s): min=12288, max=12288, per=97.19%, avg=12288.00, stdev= 0.00, samples=1 00:10:47.465 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:47.465 lat (usec) : 100=23.77%, 250=76.07%, 500=0.14%, 1000=0.02% 00:10:47.465 cpu : usr=2.90%, sys=8.60%, ctx=6236, majf=0, minf=5 00:10:47.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.466 issued rwts: total=3072,3164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.466 00:10:47.466 Run status group 0 (all jobs): 00:10:47.466 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:47.466 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:10:47.466 00:10:47.466 Disk stats (read/write): 00:10:47.466 nvme0n1: ios=2673/3072, merge=0/0, ticks=501/366, 
in_queue=867, util=91.48% 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.466 rmmod nvme_tcp 00:10:47.466 rmmod nvme_fabrics 00:10:47.466 rmmod nvme_keyring 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66152 ']' 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66152 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66152 ']' 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66152 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66152 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.466 killing process with pid 66152 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66152' 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 66152 00:10:47.466 14:49:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66152 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:47.726 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:47.985 00:10:47.985 real 0m6.272s 00:10:47.985 user 0m19.308s 00:10:47.985 sys 0m2.208s 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.985 ************************************ 
00:10:47.985 END TEST nvmf_nmic 00:10:47.985 ************************************ 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.985 ************************************ 00:10:47.985 START TEST nvmf_fio_target 00:10:47.985 ************************************ 00:10:47.985 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.244 * Looking for test storage... 00:10:48.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.244 --rc genhtml_branch_coverage=1 00:10:48.244 --rc genhtml_function_coverage=1 00:10:48.244 --rc genhtml_legend=1 00:10:48.244 --rc geninfo_all_blocks=1 00:10:48.244 --rc geninfo_unexecuted_blocks=1 00:10:48.244 00:10:48.244 ' 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.244 --rc genhtml_branch_coverage=1 00:10:48.244 --rc genhtml_function_coverage=1 00:10:48.244 --rc genhtml_legend=1 00:10:48.244 --rc geninfo_all_blocks=1 00:10:48.244 --rc geninfo_unexecuted_blocks=1 00:10:48.244 00:10:48.244 ' 00:10:48.244 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.244 --rc genhtml_branch_coverage=1 00:10:48.244 --rc genhtml_function_coverage=1 00:10:48.244 --rc genhtml_legend=1 00:10:48.244 --rc geninfo_all_blocks=1 00:10:48.244 --rc geninfo_unexecuted_blocks=1 00:10:48.244 00:10:48.244 ' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.245 --rc genhtml_branch_coverage=1 00:10:48.245 --rc genhtml_function_coverage=1 00:10:48.245 --rc genhtml_legend=1 00:10:48.245 --rc geninfo_all_blocks=1 00:10:48.245 --rc geninfo_unexecuted_blocks=1 00:10:48.245 00:10:48.245 ' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:48.245 
14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.245 14:49:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:48.245 Cannot find device "nvmf_init_br" 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:48.245 Cannot find device "nvmf_init_br2" 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:48.245 Cannot find device "nvmf_tgt_br" 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.245 Cannot find device "nvmf_tgt_br2" 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:48.245 Cannot find device "nvmf_init_br" 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:48.245 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:48.505 Cannot find device "nvmf_init_br2" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:48.505 Cannot find device "nvmf_tgt_br" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:48.505 Cannot find device "nvmf_tgt_br2" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:48.505 Cannot find device "nvmf_br" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:48.505 Cannot find device "nvmf_init_if" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:48.505 Cannot find device "nvmf_init_if2" 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:48.505 
14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:48.505 14:49:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:48.505 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:48.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:48.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.155 ms 00:10:48.764 00:10:48.764 --- 10.0.0.3 ping statistics --- 00:10:48.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.764 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:48.764 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:48.764 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:48.764 00:10:48.764 --- 10.0.0.4 ping statistics --- 00:10:48.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.764 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:48.764 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:48.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:48.764 00:10:48.764 --- 10.0.0.1 ping statistics --- 00:10:48.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.764 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:48.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:48.765 00:10:48.765 --- 10.0.0.2 ping statistics --- 00:10:48.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.765 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66477 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66477 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66477 ']' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.765 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.765 [2024-11-22 14:49:03.298195] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:10:48.765 [2024-11-22 14:49:03.298288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.023 [2024-11-22 14:49:03.446130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.023 [2024-11-22 14:49:03.510027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.023 [2024-11-22 14:49:03.510102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.023 [2024-11-22 14:49:03.510113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.023 [2024-11-22 14:49:03.510121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.023 [2024-11-22 14:49:03.510128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.023 [2024-11-22 14:49:03.511581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.023 [2024-11-22 14:49:03.511703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.023 [2024-11-22 14:49:03.511841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.023 [2024-11-22 14:49:03.511844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.023 [2024-11-22 14:49:03.587937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.023 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.023 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:49.023 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.023 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.024 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.282 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.282 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:49.542 [2024-11-22 14:49:03.968312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.542 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:49.801 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:49.801 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.061 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:50.061 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.320 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:50.320 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.579 14:49:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:50.579 14:49:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:50.838 14:49:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.097 14:49:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:51.097 14:49:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.665 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:51.665 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.924 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:51.924 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:52.183 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.442 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:52.442 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:52.701 14:49:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:52.701 14:49:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:52.701 14:49:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:53.269 [2024-11-22 14:49:07.626028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:53.269 14:49:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:53.269 14:49:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:53.529 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:53.788 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:53.788 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.788 14:49:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.788 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:53.788 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:53.788 14:49:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:55.690 14:49:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:55.690 [global] 00:10:55.690 thread=1 00:10:55.690 invalidate=1 00:10:55.690 rw=write 00:10:55.690 time_based=1 00:10:55.690 runtime=1 00:10:55.690 ioengine=libaio 00:10:55.690 direct=1 00:10:55.690 bs=4096 00:10:55.690 iodepth=1 00:10:55.690 norandommap=0 00:10:55.690 numjobs=1 00:10:55.690 00:10:55.690 verify_dump=1 00:10:55.690 verify_backlog=512 00:10:55.690 verify_state_save=0 00:10:55.690 do_verify=1 00:10:55.690 verify=crc32c-intel 00:10:55.690 [job0] 00:10:55.690 filename=/dev/nvme0n1 00:10:55.690 [job1] 00:10:55.690 filename=/dev/nvme0n2 00:10:55.690 [job2] 00:10:55.690 filename=/dev/nvme0n3 00:10:55.690 [job3] 00:10:55.690 filename=/dev/nvme0n4 00:10:55.949 Could not set queue depth (nvme0n1) 00:10:55.949 Could not set queue depth (nvme0n2) 00:10:55.949 Could not set queue depth (nvme0n3) 00:10:55.949 Could not set queue depth (nvme0n4) 00:10:55.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.949 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.950 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.950 fio-3.35 00:10:55.950 Starting 4 threads 00:10:57.325 00:10:57.325 job0: (groupid=0, jobs=1): err= 0: pid=66654: Fri Nov 22 14:49:11 2024 00:10:57.325 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:57.325 slat (nsec): min=10644, max=52950, avg=13056.52, stdev=3456.12 00:10:57.325 clat (usec): min=127, max=261, avg=159.13, stdev=16.01 00:10:57.325 lat (usec): min=139, max=272, avg=172.19, stdev=16.57 00:10:57.325 clat percentiles (usec): 00:10:57.325 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:10:57.325 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:10:57.325 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:10:57.325 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 219], 99.95th=[ 221], 00:10:57.325 | 99.99th=[ 262] 
00:10:57.325 write: IOPS=3293, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:10:57.325 slat (usec): min=12, max=159, avg=19.48, stdev= 5.76 00:10:57.325 clat (usec): min=90, max=572, avg=120.68, stdev=16.64 00:10:57.325 lat (usec): min=107, max=596, avg=140.16, stdev=17.92 00:10:57.325 clat percentiles (usec): 00:10:57.325 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 109], 00:10:57.325 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 123], 00:10:57.325 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:10:57.325 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 186], 00:10:57.325 | 99.99th=[ 570] 00:10:57.325 bw ( KiB/s): min=12440, max=12440, per=29.80%, avg=12440.00, stdev= 0.00, samples=1 00:10:57.325 iops : min= 3110, max= 3110, avg=3110.00, stdev= 0.00, samples=1 00:10:57.325 lat (usec) : 100=2.57%, 250=97.39%, 500=0.02%, 750=0.02% 00:10:57.325 cpu : usr=2.40%, sys=8.10%, ctx=6370, majf=0, minf=5 00:10:57.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.325 issued rwts: total=3072,3297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.325 job1: (groupid=0, jobs=1): err= 0: pid=66655: Fri Nov 22 14:49:11 2024 00:10:57.325 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:57.325 slat (nsec): min=10193, max=57500, avg=13245.56, stdev=3906.34 00:10:57.325 clat (usec): min=128, max=462, avg=159.90, stdev=16.80 00:10:57.325 lat (usec): min=139, max=474, avg=173.15, stdev=17.58 00:10:57.325 clat percentiles (usec): 00:10:57.325 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:10:57.325 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:10:57.325 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:10:57.325 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 223], 99.95th=[ 285], 00:10:57.326 | 99.99th=[ 461] 00:10:57.326 write: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(12.8MiB/1001msec); 0 zone resets 00:10:57.326 slat (nsec): min=12637, max=99467, avg=19506.35, stdev=5151.72 00:10:57.326 clat (usec): min=87, max=1149, avg=119.89, stdev=25.39 00:10:57.326 lat (usec): min=103, max=1168, avg=139.39, stdev=26.14 00:10:57.326 clat percentiles (usec): 00:10:57.326 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:10:57.326 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:10:57.326 | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 139], 95.00th=[ 147], 00:10:57.326 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 188], 99.95th=[ 676], 00:10:57.326 | 99.99th=[ 1156] 00:10:57.326 bw ( KiB/s): min=12792, max=12792, per=30.64%, avg=12792.00, stdev= 0.00, samples=1 00:10:57.326 iops : min= 3198, max= 3198, avg=3198.00, stdev= 0.00, samples=1 00:10:57.326 lat (usec) : 100=3.52%, 250=96.40%, 500=0.05%, 750=0.02% 00:10:57.326 lat (msec) : 2=0.02% 00:10:57.326 cpu : usr=2.10%, sys=8.50%, ctx=6365, majf=0, minf=9 00:10:57.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 issued rwts: total=3072,3289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.326 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:57.326 job2: (groupid=0, jobs=1): err= 0: pid=66656: Fri Nov 22 14:49:11 2024 00:10:57.326 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:57.326 slat (nsec): min=14982, max=64503, avg=20563.60, stdev=6109.95 00:10:57.326 clat (usec): min=166, max=4030, avg=304.06, stdev=112.04 00:10:57.326 lat (usec): min=181, max=4050, avg=324.62, stdev=112.96 00:10:57.326 clat percentiles (usec): 00:10:57.326 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:10:57.326 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:10:57.326 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 453], 00:10:57.326 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 1074], 99.95th=[ 4015], 00:10:57.326 | 99.99th=[ 4015] 00:10:57.326 write: IOPS=1947, BW=7788KiB/s (7975kB/s)(7796KiB/1001msec); 0 zone resets 00:10:57.326 slat (usec): min=22, max=158, avg=30.28, stdev= 8.30 00:10:57.326 clat (usec): min=113, max=984, avg=223.21, stdev=37.75 00:10:57.326 lat (usec): min=137, max=1011, avg=253.49, stdev=38.44 00:10:57.326 clat percentiles (usec): 00:10:57.326 | 1.00th=[ 128], 5.00th=[ 145], 10.00th=[ 178], 20.00th=[ 204], 00:10:57.326 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:10:57.326 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 269], 00:10:57.326 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 988], 00:10:57.326 | 99.99th=[ 988] 00:10:57.326 bw ( KiB/s): min= 8192, max= 8192, per=19.62%, avg=8192.00, stdev= 0.00, samples=1 00:10:57.326 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:57.326 lat (usec) : 250=47.49%, 500=51.16%, 750=1.26%, 1000=0.03% 00:10:57.326 lat (msec) : 2=0.03%, 10=0.03% 00:10:57.326 cpu : usr=1.80%, sys=7.00%, ctx=3485, majf=0, minf=7 00:10:57.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 issued rwts: total=1536,1949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.326 job3: (groupid=0, jobs=1): err= 0: pid=66657: Fri Nov 22 14:49:11 2024 00:10:57.326 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:57.326 slat (nsec): min=13917, max=50119, avg=17458.99, stdev=4319.33 00:10:57.326 clat (usec): min=162, max=928, avg=296.69, stdev=46.23 00:10:57.326 lat (usec): min=176, max=942, avg=314.15, stdev=46.98 00:10:57.326 clat percentiles (usec): 00:10:57.326 | 1.00th=[ 194], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:10:57.326 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:10:57.326 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 347], 00:10:57.326 | 99.00th=[ 486], 99.50th=[ 537], 99.90th=[ 857], 99.95th=[ 930], 00:10:57.326 | 99.99th=[ 930] 00:10:57.326 write: IOPS=1911, BW=7644KiB/s (7828kB/s)(7652KiB/1001msec); 0 zone resets 00:10:57.326 slat (usec): min=19, max=121, avg=30.01, stdev= 8.71 00:10:57.326 clat (usec): min=113, max=983, avg=237.11, stdev=62.18 00:10:57.326 lat (usec): min=138, max=1013, avg=267.13, stdev=66.40 00:10:57.326 clat percentiles (usec): 00:10:57.326 | 1.00th=[ 123], 5.00th=[ 139], 10.00th=[ 190], 20.00th=[ 212], 00:10:57.326 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:57.326 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 379], 00:10:57.326 | 99.00th=[ 
429], 99.50th=[ 453], 99.90th=[ 930], 99.95th=[ 979], 00:10:57.326 | 99.99th=[ 979] 00:10:57.326 bw ( KiB/s): min= 8192, max= 8192, per=19.62%, avg=8192.00, stdev= 0.00, samples=1 00:10:57.326 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:57.326 lat (usec) : 250=43.93%, 500=55.61%, 750=0.29%, 1000=0.17% 00:10:57.326 cpu : usr=1.60%, sys=6.80%, ctx=3449, majf=0, minf=17 00:10:57.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.326 issued rwts: total=1536,1913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.326 00:10:57.326 Run status group 0 (all jobs): 00:10:57.326 READ: bw=36.0MiB/s (37.7MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:10:57.326 WRITE: bw=40.8MiB/s (42.8MB/s), 7644KiB/s-12.9MiB/s (7828kB/s-13.5MB/s), io=40.8MiB (42.8MB), run=1001-1001msec 00:10:57.326 00:10:57.326 Disk stats (read/write): 00:10:57.326 nvme0n1: ios=2610/2839, merge=0/0, ticks=457/366, in_queue=823, util=87.27% 00:10:57.326 nvme0n2: ios=2607/2861, merge=0/0, ticks=463/363, in_queue=826, util=89.04% 00:10:57.326 nvme0n3: ios=1396/1536, merge=0/0, ticks=437/362, in_queue=799, util=88.94% 00:10:57.326 nvme0n4: ios=1377/1536, merge=0/0, ticks=406/383, in_queue=789, util=89.68% 00:10:57.326 14:49:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:57.326 [global] 00:10:57.326 thread=1 00:10:57.326 invalidate=1 00:10:57.326 rw=randwrite 00:10:57.326 time_based=1 00:10:57.326 runtime=1 00:10:57.326 ioengine=libaio 00:10:57.326 direct=1 00:10:57.326 bs=4096 00:10:57.326 iodepth=1 00:10:57.326 norandommap=0 00:10:57.326 numjobs=1 00:10:57.326 00:10:57.326 verify_dump=1 00:10:57.326 verify_backlog=512 00:10:57.326 verify_state_save=0 00:10:57.326 do_verify=1 00:10:57.326 verify=crc32c-intel 00:10:57.326 [job0] 00:10:57.326 filename=/dev/nvme0n1 00:10:57.326 [job1] 00:10:57.326 filename=/dev/nvme0n2 00:10:57.326 [job2] 00:10:57.326 filename=/dev/nvme0n3 00:10:57.326 [job3] 00:10:57.326 filename=/dev/nvme0n4 00:10:57.326 Could not set queue depth (nvme0n1) 00:10:57.326 Could not set queue depth (nvme0n2) 00:10:57.326 Could not set queue depth (nvme0n3) 00:10:57.326 Could not set queue depth (nvme0n4) 00:10:57.326 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.326 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.326 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.326 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.326 fio-3.35 00:10:57.326 Starting 4 threads 00:10:58.735 00:10:58.735 job0: (groupid=0, jobs=1): err= 0: pid=66720: Fri Nov 22 14:49:13 2024 00:10:58.735 read: IOPS=3085, BW=12.1MiB/s (12.6MB/s)(12.1MiB/1001msec) 00:10:58.735 slat (nsec): min=10396, max=32241, avg=11487.61, stdev=1688.79 00:10:58.735 clat (usec): min=129, max=2362, avg=155.03, stdev=42.64 00:10:58.735 lat (usec): min=140, max=2373, avg=166.52, stdev=42.68 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 135], 
5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 145], 00:10:58.735 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:10:58.735 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:10:58.735 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 227], 99.95th=[ 652], 00:10:58.735 | 99.99th=[ 2376] 00:10:58.735 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:58.735 slat (usec): min=13, max=105, avg=18.03, stdev= 3.85 00:10:58.735 clat (usec): min=90, max=194, avg=114.94, stdev=12.24 00:10:58.735 lat (usec): min=107, max=300, avg=132.97, stdev=13.23 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 105], 00:10:58.735 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:10:58.735 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 139], 00:10:58.735 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 176], 00:10:58.735 | 99.99th=[ 194] 00:10:58.735 bw ( KiB/s): min=14488, max=14488, per=34.02%, avg=14488.00, stdev= 0.00, samples=1 00:10:58.735 iops : min= 3622, max= 3622, avg=3622.00, stdev= 0.00, samples=1 00:10:58.735 lat (usec) : 100=4.92%, 250=95.04%, 500=0.01%, 750=0.01% 00:10:58.735 lat (msec) : 4=0.01% 00:10:58.735 cpu : usr=2.10%, sys=8.10%, ctx=6675, majf=0, minf=11 00:10:58.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 issued rwts: total=3089,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.735 job1: (groupid=0, jobs=1): err= 0: pid=66721: Fri Nov 22 14:49:13 2024 00:10:58.735 read: IOPS=3025, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:10:58.735 slat (nsec): min=11095, max=99166, avg=13458.31, stdev=2522.08 00:10:58.735 clat (usec): min=112, max=310, avg=166.30, stdev=13.15 00:10:58.735 lat (usec): min=148, max=322, avg=179.76, stdev=13.39 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:10:58.735 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:10:58.735 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:10:58.735 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 245], 00:10:58.735 | 99.99th=[ 310] 00:10:58.735 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:58.735 slat (nsec): min=13078, max=90472, avg=20660.56, stdev=3969.00 00:10:58.735 clat (usec): min=95, max=753, avg=124.60, stdev=17.67 00:10:58.735 lat (usec): min=112, max=783, avg=145.26, stdev=18.35 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 116], 00:10:58.735 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:10:58.735 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:10:58.735 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 273], 99.95th=[ 388], 00:10:58.735 | 99.99th=[ 750] 00:10:58.735 bw ( KiB/s): min=12288, max=12288, per=28.86%, avg=12288.00, stdev= 0.00, samples=1 00:10:58.735 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:58.735 lat (usec) : 100=0.21%, 250=99.69%, 500=0.08%, 1000=0.02% 00:10:58.735 cpu : usr=2.40%, sys=8.00%, ctx=6103, majf=0, minf=11 00:10:58.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:58.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 issued rwts: total=3029,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.735 job2: (groupid=0, jobs=1): err= 0: pid=66722: Fri Nov 22 14:49:13 2024 00:10:58.735 read: IOPS=1572, BW=6290KiB/s (6441kB/s)(6296KiB/1001msec) 00:10:58.735 slat (nsec): min=13507, max=64333, avg=17834.01, stdev=5599.58 00:10:58.735 clat (usec): min=241, max=1199, avg=305.95, stdev=74.87 00:10:58.735 lat (usec): min=258, max=1243, avg=323.78, stdev=79.07 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:58.735 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:58.735 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 388], 95.00th=[ 515], 00:10:58.735 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 775], 99.95th=[ 1205], 00:10:58.735 | 99.99th=[ 1205] 00:10:58.735 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:58.735 slat (usec): min=18, max=138, avg=25.19, stdev= 5.91 00:10:58.735 clat (usec): min=106, max=1811, avg=210.69, stdev=57.08 00:10:58.735 lat (usec): min=127, max=1835, avg=235.88, stdev=57.69 00:10:58.735 clat percentiles (usec): 00:10:58.735 | 1.00th=[ 117], 5.00th=[ 131], 10.00th=[ 141], 20.00th=[ 180], 00:10:58.735 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:10:58.735 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:10:58.735 | 99.00th=[ 302], 99.50th=[ 424], 99.90th=[ 619], 99.95th=[ 627], 00:10:58.735 | 99.99th=[ 1811] 00:10:58.735 bw ( KiB/s): min= 8192, max= 8192, per=19.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.735 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.735 lat (usec) : 250=51.79%, 500=45.06%, 750=3.06%, 1000=0.03% 00:10:58.735 lat (msec) : 2=0.06% 00:10:58.735 cpu : usr=1.80%, sys=6.20%, ctx=3622, majf=0, minf=13 00:10:58.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.735 issued rwts: total=1574,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.735 job3: (groupid=0, jobs=1): err= 0: pid=66723: Fri Nov 22 14:49:13 2024 00:10:58.735 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:58.735 slat (nsec): min=13908, max=46820, avg=16727.82, stdev=4149.62 00:10:58.735 clat (usec): min=176, max=2195, avg=297.97, stdev=71.88 00:10:58.735 lat (usec): min=214, max=2215, avg=314.70, stdev=73.62 00:10:58.736 clat percentiles (usec): 00:10:58.736 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:10:58.736 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:10:58.736 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 420], 00:10:58.736 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 1172], 99.95th=[ 2212], 00:10:58.736 | 99.99th=[ 2212] 00:10:58.736 write: IOPS=1950, BW=7800KiB/s (7987kB/s)(7808KiB/1001msec); 0 zone resets 00:10:58.736 slat (usec): min=20, max=134, avg=28.27, stdev= 8.52 00:10:58.736 clat (usec): min=114, max=2576, avg=232.95, stdev=87.67 00:10:58.736 lat (usec): min=138, max=2616, avg=261.22, stdev=91.05 
00:10:58.736 clat percentiles (usec): 00:10:58.736 | 1.00th=[ 125], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 198], 00:10:58.736 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:10:58.736 | 70.00th=[ 237], 80.00th=[ 253], 90.00th=[ 343], 95.00th=[ 383], 00:10:58.736 | 99.00th=[ 433], 99.50th=[ 478], 99.90th=[ 857], 99.95th=[ 2573], 00:10:58.736 | 99.99th=[ 2573] 00:10:58.736 bw ( KiB/s): min= 8192, max= 8192, per=19.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:58.736 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:58.736 lat (usec) : 250=44.52%, 500=54.62%, 750=0.72%, 1000=0.06% 00:10:58.736 lat (msec) : 2=0.03%, 4=0.06% 00:10:58.736 cpu : usr=2.00%, sys=6.10%, ctx=3496, majf=0, minf=13 00:10:58.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.736 issued rwts: total=1536,1952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.736 00:10:58.736 Run status group 0 (all jobs): 00:10:58.736 READ: bw=36.0MiB/s (37.8MB/s), 6138KiB/s-12.1MiB/s (6285kB/s-12.6MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:10:58.736 WRITE: bw=41.6MiB/s (43.6MB/s), 7800KiB/s-14.0MiB/s (7987kB/s-14.7MB/s), io=41.6MiB (43.6MB), run=1001-1001msec 00:10:58.736 00:10:58.736 Disk stats (read/write): 00:10:58.736 nvme0n1: ios=2803/3072, merge=0/0, ticks=448/367, in_queue=815, util=88.78% 00:10:58.736 nvme0n2: ios=2609/2762, merge=0/0, ticks=462/376, in_queue=838, util=89.91% 00:10:58.736 nvme0n3: ios=1563/1621, merge=0/0, ticks=536/353, in_queue=889, util=90.27% 00:10:58.736 nvme0n4: ios=1511/1536, merge=0/0, ticks=496/368, in_queue=864, util=90.42% 00:10:58.736 14:49:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:58.736 [global] 00:10:58.736 thread=1 00:10:58.736 invalidate=1 00:10:58.736 rw=write 00:10:58.736 time_based=1 00:10:58.736 runtime=1 00:10:58.736 ioengine=libaio 00:10:58.736 direct=1 00:10:58.736 bs=4096 00:10:58.736 iodepth=128 00:10:58.736 norandommap=0 00:10:58.736 numjobs=1 00:10:58.736 00:10:58.736 verify_dump=1 00:10:58.736 verify_backlog=512 00:10:58.736 verify_state_save=0 00:10:58.736 do_verify=1 00:10:58.736 verify=crc32c-intel 00:10:58.736 [job0] 00:10:58.736 filename=/dev/nvme0n1 00:10:58.736 [job1] 00:10:58.736 filename=/dev/nvme0n2 00:10:58.736 [job2] 00:10:58.736 filename=/dev/nvme0n3 00:10:58.736 [job3] 00:10:58.736 filename=/dev/nvme0n4 00:10:58.736 Could not set queue depth (nvme0n1) 00:10:58.736 Could not set queue depth (nvme0n2) 00:10:58.736 Could not set queue depth (nvme0n3) 00:10:58.736 Could not set queue depth (nvme0n4) 00:10:58.736 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.736 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.736 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.736 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.736 fio-3.35 00:10:58.736 Starting 4 threads 00:11:00.115 00:11:00.115 job0: (groupid=0, jobs=1): err= 0: pid=66779: Fri Nov 22 14:49:14 2024 00:11:00.115 read: 
IOPS=5347, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1003msec) 00:11:00.115 slat (usec): min=5, max=5448, avg=89.97, stdev=427.08 00:11:00.115 clat (usec): min=908, max=18116, avg=11731.22, stdev=1432.60 00:11:00.115 lat (usec): min=2990, max=18155, avg=11821.19, stdev=1449.76 00:11:00.115 clat percentiles (usec): 00:11:00.115 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11207], 00:11:00.115 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:11:00.115 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13829], 00:11:00.115 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:11:00.115 | 99.99th=[18220] 00:11:00.115 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:00.115 slat (usec): min=10, max=4691, avg=84.46, stdev=445.64 00:11:00.115 clat (usec): min=5288, max=18063, avg=11343.53, stdev=1299.14 00:11:00.115 lat (usec): min=5308, max=18300, avg=11427.99, stdev=1363.15 00:11:00.115 clat percentiles (usec): 00:11:00.115 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10683], 00:11:00.115 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:11:00.115 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12780], 95.00th=[13173], 00:11:00.115 | 99.00th=[16188], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:11:00.115 | 99.99th=[17957] 00:11:00.115 bw ( KiB/s): min=22328, max=22773, per=35.62%, avg=22550.50, stdev=314.66, samples=2 00:11:00.115 iops : min= 5582, max= 5693, avg=5637.50, stdev=78.49, samples=2 00:11:00.115 lat (usec) : 1000=0.01% 00:11:00.115 lat (msec) : 4=0.34%, 10=5.86%, 20=93.80% 00:11:00.115 cpu : usr=4.99%, sys=14.47%, ctx=420, majf=0, minf=1 00:11:00.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:00.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.115 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.115 job1: (groupid=0, jobs=1): err= 0: pid=66780: Fri Nov 22 14:49:14 2024 00:11:00.115 read: IOPS=1782, BW=7131KiB/s (7302kB/s)(7152KiB/1003msec) 00:11:00.115 slat (usec): min=5, max=9821, avg=220.46, stdev=941.65 00:11:00.115 clat (usec): min=1401, max=48178, avg=25524.01, stdev=6213.60 00:11:00.115 lat (usec): min=6181, max=48191, avg=25744.47, stdev=6292.37 00:11:00.115 clat percentiles (usec): 00:11:00.115 | 1.00th=[ 6390], 5.00th=[19006], 10.00th=[20579], 20.00th=[21365], 00:11:00.115 | 30.00th=[21365], 40.00th=[21890], 50.00th=[24249], 60.00th=[27395], 00:11:00.115 | 70.00th=[29754], 80.00th=[31327], 90.00th=[32113], 95.00th=[33424], 00:11:00.115 | 99.00th=[43254], 99.50th=[46924], 99.90th=[47449], 99.95th=[47973], 00:11:00.115 | 99.99th=[47973] 00:11:00.115 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:11:00.115 slat (usec): min=13, max=7207, avg=288.98, stdev=971.84 00:11:00.115 clat (usec): min=19949, max=67284, avg=39399.19, stdev=12860.73 00:11:00.115 lat (usec): min=19970, max=67306, avg=39688.17, stdev=12937.10 00:11:00.115 clat percentiles (usec): 00:11:00.115 | 1.00th=[20317], 5.00th=[20579], 10.00th=[20841], 20.00th=[21627], 00:11:00.115 | 30.00th=[33817], 40.00th=[38536], 50.00th=[39584], 60.00th=[41157], 00:11:00.115 | 70.00th=[45876], 80.00th=[51119], 90.00th=[57934], 95.00th=[61080], 00:11:00.115 | 99.00th=[64750], 99.50th=[66847], 99.90th=[67634], 
99.95th=[67634], 00:11:00.115 | 99.99th=[67634] 00:11:00.115 bw ( KiB/s): min= 8168, max= 8216, per=12.94%, avg=8192.00, stdev=33.94, samples=2 00:11:00.115 iops : min= 2042, max= 2054, avg=2048.00, stdev= 8.49, samples=2 00:11:00.115 lat (msec) : 2=0.03%, 10=1.09%, 20=2.48%, 50=84.70%, 100=11.70% 00:11:00.115 cpu : usr=2.10%, sys=4.89%, ctx=285, majf=0, minf=8 00:11:00.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:00.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.115 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.115 job2: (groupid=0, jobs=1): err= 0: pid=66781: Fri Nov 22 14:49:14 2024 00:11:00.115 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:00.115 slat (usec): min=6, max=6969, avg=167.16, stdev=860.54 00:11:00.115 clat (usec): min=6325, max=28960, avg=21355.34, stdev=3904.61 00:11:00.115 lat (usec): min=6339, max=28975, avg=21522.50, stdev=3843.89 00:11:00.115 clat percentiles (usec): 00:11:00.115 | 1.00th=[ 6980], 5.00th=[16909], 10.00th=[18220], 20.00th=[19268], 00:11:00.116 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:11:00.116 | 70.00th=[22414], 80.00th=[25297], 90.00th=[28181], 95.00th=[28443], 00:11:00.116 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:11:00.116 | 99.99th=[28967] 00:11:00.116 write: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:00.116 slat (usec): min=9, max=6584, avg=149.71, stdev=717.14 00:11:00.116 clat (usec): min=1010, max=28840, avg=19640.89, stdev=3996.69 00:11:00.116 lat (usec): min=6319, max=28866, avg=19790.60, stdev=3947.88 00:11:00.116 clat percentiles (usec): 00:11:00.116 | 1.00th=[13304], 5.00th=[15664], 10.00th=[15926], 20.00th=[16188], 00:11:00.116 | 30.00th=[16450], 40.00th=[16909], 50.00th=[18220], 60.00th=[20579], 00:11:00.116 | 70.00th=[21103], 80.00th=[22676], 90.00th=[27395], 95.00th=[27657], 00:11:00.116 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:11:00.116 | 99.99th=[28967] 00:11:00.116 bw ( KiB/s): min=12288, max=12312, per=19.43%, avg=12300.00, stdev=16.97, samples=2 00:11:00.116 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:00.116 lat (msec) : 2=0.02%, 10=0.52%, 20=53.72%, 50=45.74% 00:11:00.116 cpu : usr=3.69%, sys=8.88%, ctx=193, majf=0, minf=5 00:11:00.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:00.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.116 issued rwts: total=3072,3073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.116 job3: (groupid=0, jobs=1): err= 0: pid=66782: Fri Nov 22 14:49:14 2024 00:11:00.116 read: IOPS=4640, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1002msec) 00:11:00.116 slat (usec): min=8, max=5251, avg=100.03, stdev=397.88 00:11:00.116 clat (usec): min=561, max=18289, avg=13208.00, stdev=1162.03 00:11:00.116 lat (usec): min=4519, max=20365, avg=13308.03, stdev=1199.20 00:11:00.116 clat percentiles (usec): 00:11:00.116 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12387], 20.00th=[12649], 00:11:00.116 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:11:00.116 | 
70.00th=[13435], 80.00th=[13566], 90.00th=[14746], 95.00th=[15139], 00:11:00.116 | 99.00th=[16319], 99.50th=[16712], 99.90th=[16909], 99.95th=[16909], 00:11:00.116 | 99.99th=[18220] 00:11:00.116 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:00.116 slat (usec): min=9, max=4182, avg=96.91, stdev=453.79 00:11:00.116 clat (usec): min=8673, max=17353, avg=12728.15, stdev=917.43 00:11:00.116 lat (usec): min=8689, max=17373, avg=12825.05, stdev=1008.25 00:11:00.116 clat percentiles (usec): 00:11:00.116 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:11:00.116 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:11:00.116 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14615], 00:11:00.116 | 99.00th=[15795], 99.50th=[16319], 99.90th=[17171], 99.95th=[17433], 00:11:00.116 | 99.99th=[17433] 00:11:00.116 bw ( KiB/s): min=19792, max=20521, per=31.84%, avg=20156.50, stdev=515.48, samples=2 00:11:00.116 iops : min= 4948, max= 5130, avg=5039.00, stdev=128.69, samples=2 00:11:00.116 lat (usec) : 750=0.01% 00:11:00.116 lat (msec) : 10=0.63%, 20=99.36% 00:11:00.116 cpu : usr=4.20%, sys=14.09%, ctx=399, majf=0, minf=1 00:11:00.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:00.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.116 issued rwts: total=4650,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.116 00:11:00.116 Run status group 0 (all jobs): 00:11:00.116 READ: bw=57.9MiB/s (60.7MB/s), 7131KiB/s-20.9MiB/s (7302kB/s-21.9MB/s), io=58.1MiB (60.9MB), run=1002-1003msec 00:11:00.116 WRITE: bw=61.8MiB/s (64.8MB/s), 8167KiB/s-21.9MiB/s (8364kB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1002-1003msec 00:11:00.116 00:11:00.116 Disk stats (read/write): 00:11:00.116 nvme0n1: ios=4658/4895, merge=0/0, ticks=25750/23234, in_queue=48984, util=88.08% 00:11:00.116 nvme0n2: ios=1585/1799, merge=0/0, ticks=13762/21195, in_queue=34957, util=89.38% 00:11:00.116 nvme0n3: ios=2566/2688, merge=0/0, ticks=12900/11859, in_queue=24759, util=89.29% 00:11:00.116 nvme0n4: ios=4096/4354, merge=0/0, ticks=17167/15206, in_queue=32373, util=89.74% 00:11:00.116 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:00.116 [global] 00:11:00.116 thread=1 00:11:00.116 invalidate=1 00:11:00.116 rw=randwrite 00:11:00.116 time_based=1 00:11:00.116 runtime=1 00:11:00.116 ioengine=libaio 00:11:00.116 direct=1 00:11:00.116 bs=4096 00:11:00.116 iodepth=128 00:11:00.116 norandommap=0 00:11:00.116 numjobs=1 00:11:00.116 00:11:00.116 verify_dump=1 00:11:00.116 verify_backlog=512 00:11:00.116 verify_state_save=0 00:11:00.116 do_verify=1 00:11:00.116 verify=crc32c-intel 00:11:00.116 [job0] 00:11:00.116 filename=/dev/nvme0n1 00:11:00.116 [job1] 00:11:00.116 filename=/dev/nvme0n2 00:11:00.116 [job2] 00:11:00.116 filename=/dev/nvme0n3 00:11:00.116 [job3] 00:11:00.116 filename=/dev/nvme0n4 00:11:00.116 Could not set queue depth (nvme0n1) 00:11:00.116 Could not set queue depth (nvme0n2) 00:11:00.116 Could not set queue depth (nvme0n3) 00:11:00.116 Could not set queue depth (nvme0n4) 00:11:00.116 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.116 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.116 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.116 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.116 fio-3.35 00:11:00.116 Starting 4 threads 00:11:01.495 00:11:01.495 job0: (groupid=0, jobs=1): err= 0: pid=66835: Fri Nov 22 14:49:15 2024 00:11:01.495 read: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:11:01.495 slat (usec): min=4, max=8100, avg=154.94, stdev=670.87 00:11:01.495 clat (usec): min=2485, max=35932, avg=19820.04, stdev=7378.40 00:11:01.495 lat (usec): min=2496, max=35945, avg=19974.99, stdev=7418.89 00:11:01.495 clat percentiles (usec): 00:11:01.495 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[12649], 20.00th=[13173], 00:11:01.495 | 30.00th=[13566], 40.00th=[14091], 50.00th=[15795], 60.00th=[23725], 00:11:01.495 | 70.00th=[25560], 80.00th=[27657], 90.00th=[29492], 95.00th=[31589], 00:11:01.495 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:11:01.495 | 99.99th=[35914] 00:11:01.495 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:01.495 slat (usec): min=10, max=5486, avg=124.40, stdev=543.78 00:11:01.495 clat (usec): min=5734, max=28457, avg=16569.43, stdev=5071.53 00:11:01.495 lat (usec): min=5752, max=28694, avg=16693.82, stdev=5118.67 00:11:01.495 clat percentiles (usec): 00:11:01.495 | 1.00th=[ 8979], 5.00th=[11338], 10.00th=[12125], 20.00th=[12387], 00:11:01.495 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[16712], 00:11:01.495 | 70.00th=[19792], 80.00th=[22414], 90.00th=[23987], 95.00th=[25822], 00:11:01.495 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:11:01.495 | 99.99th=[28443] 00:11:01.495 bw ( KiB/s): min=10576, max=18132, per=20.77%, avg=14354.00, stdev=5342.90, samples=2 00:11:01.495 iops : min= 2644, max= 4533, avg=3588.50, stdev=1335.72, samples=2 00:11:01.495 lat (msec) : 4=0.13%, 10=1.99%, 20=60.07%, 50=37.81% 00:11:01.495 cpu : usr=3.59%, sys=9.18%, ctx=645, majf=0, minf=1 00:11:01.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:01.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.495 issued rwts: total=3403,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.495 job1: (groupid=0, jobs=1): err= 0: pid=66836: Fri Nov 22 14:49:15 2024 00:11:01.495 read: IOPS=5291, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1004msec) 00:11:01.495 slat (usec): min=6, max=8345, avg=87.20, stdev=535.16 00:11:01.495 clat (usec): min=1160, max=21152, avg=12104.72, stdev=1798.13 00:11:01.495 lat (usec): min=4808, max=25995, avg=12191.92, stdev=1801.81 00:11:01.495 clat percentiles (usec): 00:11:01.495 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11207], 00:11:01.495 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:11:01.495 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13829], 95.00th=[14615], 00:11:01.495 | 99.00th=[17433], 99.50th=[18744], 99.90th=[21103], 99.95th=[21103], 00:11:01.495 | 99.99th=[21103] 00:11:01.495 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:01.495 slat (usec): min=9, max=9317, avg=88.51, stdev=529.98 00:11:01.495 clat 
(usec): min=5350, max=17702, avg=11166.43, stdev=1458.80 00:11:01.495 lat (usec): min=7670, max=17723, avg=11254.94, stdev=1392.21 00:11:01.495 clat percentiles (usec): 00:11:01.495 | 1.00th=[ 7504], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[10028], 00:11:01.495 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:11:01.495 | 70.00th=[11731], 80.00th=[12256], 90.00th=[12911], 95.00th=[13173], 00:11:01.495 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:11:01.495 | 99.99th=[17695] 00:11:01.495 bw ( KiB/s): min=20521, max=24576, per=32.63%, avg=22548.50, stdev=2867.32, samples=2 00:11:01.495 iops : min= 5130, max= 6144, avg=5637.00, stdev=717.01, samples=2 00:11:01.495 lat (msec) : 2=0.01%, 10=12.86%, 20=86.92%, 50=0.22% 00:11:01.495 cpu : usr=4.59%, sys=13.86%, ctx=234, majf=0, minf=8 00:11:01.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:01.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.495 issued rwts: total=5313,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.495 job2: (groupid=0, jobs=1): err= 0: pid=66837: Fri Nov 22 14:49:15 2024 00:11:01.495 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:01.495 slat (usec): min=8, max=7928, avg=168.77, stdev=689.29 00:11:01.495 clat (usec): min=11402, max=39821, avg=21501.83, stdev=6721.09 00:11:01.495 lat (usec): min=14127, max=39834, avg=21670.60, stdev=6756.84 00:11:01.495 clat percentiles (usec): 00:11:01.495 | 1.00th=[12387], 5.00th=[14615], 10.00th=[14877], 20.00th=[15270], 00:11:01.496 | 30.00th=[15533], 40.00th=[15664], 50.00th=[16909], 60.00th=[24249], 00:11:01.496 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30802], 95.00th=[31851], 00:11:01.496 | 99.00th=[35390], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:11:01.496 | 99.99th=[39584] 00:11:01.496 write: IOPS=3322, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1003msec); 0 zone resets 00:11:01.496 slat (usec): min=11, max=5575, avg=136.45, stdev=541.88 00:11:01.496 clat (usec): min=2185, max=34494, avg=18146.83, stdev=4618.64 00:11:01.496 lat (usec): min=2205, max=34516, avg=18283.28, stdev=4622.19 00:11:01.496 clat percentiles (usec): 00:11:01.496 | 1.00th=[ 9503], 5.00th=[14353], 10.00th=[14615], 20.00th=[14746], 00:11:01.496 | 30.00th=[15008], 40.00th=[15139], 50.00th=[16057], 60.00th=[17957], 00:11:01.496 | 70.00th=[20579], 80.00th=[22414], 90.00th=[24773], 95.00th=[26870], 00:11:01.496 | 99.00th=[31851], 99.50th=[32375], 99.90th=[33817], 99.95th=[34341], 00:11:01.496 | 99.99th=[34341] 00:11:01.496 bw ( KiB/s): min= 9256, max=16416, per=18.57%, avg=12836.00, stdev=5062.88, samples=2 00:11:01.496 iops : min= 2314, max= 4104, avg=3209.00, stdev=1265.72, samples=2 00:11:01.496 lat (msec) : 4=0.30%, 10=0.25%, 20=59.17%, 50=40.29% 00:11:01.496 cpu : usr=3.09%, sys=9.38%, ctx=609, majf=0, minf=5 00:11:01.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:01.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.496 issued rwts: total=3072,3332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.496 job3: (groupid=0, jobs=1): err= 0: pid=66838: Fri Nov 22 14:49:15 2024 00:11:01.496 read: 
IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:01.496 slat (usec): min=7, max=7327, avg=100.76, stdev=639.19 00:11:01.496 clat (usec): min=7988, max=23628, avg=14113.64, stdev=1953.80 00:11:01.496 lat (usec): min=8002, max=28102, avg=14214.41, stdev=1984.62 00:11:01.496 clat percentiles (usec): 00:11:01.496 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[12518], 20.00th=[13042], 00:11:01.496 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[14091], 00:11:01.496 | 70.00th=[14877], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:11:01.496 | 99.00th=[21365], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:11:01.496 | 99.99th=[23725] 00:11:01.496 write: IOPS=4789, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1002msec); 0 zone resets 00:11:01.496 slat (usec): min=5, max=11903, avg=104.01, stdev=635.63 00:11:01.496 clat (usec): min=709, max=20640, avg=12913.59, stdev=1858.79 00:11:01.496 lat (usec): min=5051, max=20666, avg=13017.60, stdev=1774.93 00:11:01.496 clat percentiles (usec): 00:11:01.496 | 1.00th=[ 6259], 5.00th=[10028], 10.00th=[11076], 20.00th=[11731], 00:11:01.496 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:11:01.496 | 70.00th=[13829], 80.00th=[14353], 90.00th=[14746], 95.00th=[15008], 00:11:01.496 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:11:01.496 | 99.99th=[20579] 00:11:01.496 bw ( KiB/s): min=18424, max=19016, per=27.09%, avg=18720.00, stdev=418.61, samples=2 00:11:01.496 iops : min= 4606, max= 4754, avg=4680.00, stdev=104.65, samples=2 00:11:01.496 lat (usec) : 750=0.01% 00:11:01.496 lat (msec) : 10=3.86%, 20=94.45%, 50=1.68% 00:11:01.496 cpu : usr=4.50%, sys=12.19%, ctx=202, majf=0, minf=5 00:11:01.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:01.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.496 issued rwts: total=4608,4799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.496 00:11:01.496 Run status group 0 (all jobs): 00:11:01.496 READ: bw=63.8MiB/s (66.9MB/s), 12.0MiB/s-20.7MiB/s (12.5MB/s-21.7MB/s), io=64.0MiB (67.2MB), run=1002-1004msec 00:11:01.496 WRITE: bw=67.5MiB/s (70.8MB/s), 13.0MiB/s-21.9MiB/s (13.6MB/s-23.0MB/s), io=67.8MiB (71.1MB), run=1002-1004msec 00:11:01.496 00:11:01.496 Disk stats (read/write): 00:11:01.496 nvme0n1: ios=3122/3168, merge=0/0, ticks=20322/16513, in_queue=36835, util=87.58% 00:11:01.496 nvme0n2: ios=4607/4608, merge=0/0, ticks=52637/48417, in_queue=101054, util=89.18% 00:11:01.496 nvme0n3: ios=2642/3072, merge=0/0, ticks=12777/12070, in_queue=24847, util=89.38% 00:11:01.496 nvme0n4: ios=3842/4096, merge=0/0, ticks=51708/49939, in_queue=101647, util=89.83% 00:11:01.496 14:49:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:01.496 14:49:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66851 00:11:01.496 14:49:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:01.496 14:49:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:01.496 [global] 00:11:01.496 thread=1 00:11:01.496 invalidate=1 00:11:01.496 rw=read 00:11:01.496 time_based=1 00:11:01.496 runtime=10 00:11:01.496 ioengine=libaio 00:11:01.496 direct=1 00:11:01.496 bs=4096 00:11:01.496 
iodepth=1 00:11:01.496 norandommap=1 00:11:01.496 numjobs=1 00:11:01.496 00:11:01.496 [job0] 00:11:01.496 filename=/dev/nvme0n1 00:11:01.496 [job1] 00:11:01.496 filename=/dev/nvme0n2 00:11:01.496 [job2] 00:11:01.496 filename=/dev/nvme0n3 00:11:01.496 [job3] 00:11:01.496 filename=/dev/nvme0n4 00:11:01.496 Could not set queue depth (nvme0n1) 00:11:01.496 Could not set queue depth (nvme0n2) 00:11:01.496 Could not set queue depth (nvme0n3) 00:11:01.496 Could not set queue depth (nvme0n4) 00:11:01.496 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.496 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.496 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.496 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.496 fio-3.35 00:11:01.496 Starting 4 threads 00:11:04.779 14:49:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:04.779 fio: pid=66900, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:04.779 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=60522496, buflen=4096 00:11:04.779 14:49:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:05.037 fio: pid=66899, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.037 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69980160, buflen=4096 00:11:05.037 14:49:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.037 14:49:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:05.295 fio: pid=66897, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.295 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57188352, buflen=4096 00:11:05.295 14:49:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.295 14:49:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:05.554 fio: pid=66898, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.554 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2383872, buflen=4096 00:11:05.554 00:11:05.554 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66897: Fri Nov 22 14:49:20 2024 00:11:05.554 read: IOPS=3954, BW=15.4MiB/s (16.2MB/s)(54.5MiB/3531msec) 00:11:05.554 slat (usec): min=7, max=11254, avg=13.90, stdev=164.65 00:11:05.554 clat (nsec): min=1590, max=2957.8k, avg=237802.43, stdev=44717.62 00:11:05.554 lat (usec): min=131, max=11517, avg=251.70, stdev=171.36 00:11:05.554 clat percentiles (usec): 00:11:05.554 | 1.00th=[ 155], 5.00th=[ 200], 10.00th=[ 210], 20.00th=[ 219], 00:11:05.554 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:11:05.554 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:11:05.554 | 
99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 424], 99.95th=[ 644], 00:11:05.554 | 99.99th=[ 2507] 00:11:05.554 bw ( KiB/s): min=14912, max=16152, per=23.94%, avg=15676.17, stdev=497.88, samples=6 00:11:05.554 iops : min= 3728, max= 4038, avg=3919.00, stdev=124.47, samples=6 00:11:05.554 lat (usec) : 2=0.01%, 250=71.66%, 500=28.26%, 750=0.03%, 1000=0.01% 00:11:05.554 lat (msec) : 2=0.01%, 4=0.01% 00:11:05.554 cpu : usr=0.99%, sys=4.19%, ctx=13983, majf=0, minf=1 00:11:05.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.554 issued rwts: total=13963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.554 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66898: Fri Nov 22 14:49:20 2024 00:11:05.554 read: IOPS=4423, BW=17.3MiB/s (18.1MB/s)(66.3MiB/3836msec) 00:11:05.554 slat (usec): min=7, max=12444, avg=14.49, stdev=160.91 00:11:05.554 clat (nsec): min=1851, max=28888k, avg=210433.96, stdev=228982.95 00:11:05.554 lat (usec): min=126, max=39346, avg=224.92, stdev=337.37 00:11:05.554 clat percentiles (usec): 00:11:05.554 | 1.00th=[ 126], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 163], 00:11:05.554 | 30.00th=[ 180], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 227], 00:11:05.554 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:11:05.554 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 429], 99.95th=[ 578], 00:11:05.554 | 99.99th=[ 4178] 00:11:05.554 bw ( KiB/s): min=15872, max=21600, per=26.46%, avg=17324.86, stdev=2278.15, samples=7 00:11:05.554 iops : min= 3968, max= 5400, avg=4331.14, stdev=569.58, samples=7 00:11:05.554 lat (usec) : 2=0.01%, 250=85.91%, 500=14.02%, 750=0.01%, 1000=0.01% 00:11:05.554 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.01% 00:11:05.554 cpu : usr=1.10%, sys=4.90%, ctx=16979, majf=0, minf=2 00:11:05.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.554 issued rwts: total=16967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.554 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66899: Fri Nov 22 14:49:20 2024 00:11:05.554 read: IOPS=5242, BW=20.5MiB/s (21.5MB/s)(66.7MiB/3259msec) 00:11:05.554 slat (usec): min=10, max=7846, avg=13.35, stdev=82.77 00:11:05.554 clat (usec): min=141, max=2278, avg=176.21, stdev=39.08 00:11:05.554 lat (usec): min=153, max=8036, avg=189.56, stdev=91.62 00:11:05.554 clat percentiles (usec): 00:11:05.554 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:11:05.554 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:05.554 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 206], 00:11:05.555 | 99.00th=[ 229], 99.50th=[ 277], 99.90th=[ 619], 99.95th=[ 832], 00:11:05.555 | 99.99th=[ 2212] 00:11:05.555 bw ( KiB/s): min=20536, max=21376, per=32.24%, avg=21109.67, stdev=322.00, samples=6 00:11:05.555 iops : min= 5134, max= 5344, avg=5277.33, stdev=80.44, samples=6 00:11:05.555 lat (usec) : 250=99.38%, 500=0.47%, 750=0.08%, 1000=0.03% 00:11:05.555 
lat (msec) : 2=0.02%, 4=0.01% 00:11:05.555 cpu : usr=1.29%, sys=5.65%, ctx=17089, majf=0, minf=1 00:11:05.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.555 issued rwts: total=17086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.555 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66900: Fri Nov 22 14:49:20 2024 00:11:05.555 read: IOPS=4983, BW=19.5MiB/s (20.4MB/s)(57.7MiB/2965msec) 00:11:05.555 slat (usec): min=9, max=254, avg=12.51, stdev= 4.00 00:11:05.555 clat (usec): min=2, max=2377, avg=187.08, stdev=45.21 00:11:05.555 lat (usec): min=152, max=2390, avg=199.59, stdev=45.79 00:11:05.555 clat percentiles (usec): 00:11:05.555 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:05.555 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:11:05.555 | 70.00th=[ 188], 80.00th=[ 215], 90.00th=[ 251], 95.00th=[ 265], 00:11:05.555 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 457], 00:11:05.555 | 99.99th=[ 2114] 00:11:05.555 bw ( KiB/s): min=14928, max=21920, per=30.01%, avg=19648.00, stdev=3143.32, samples=5 00:11:05.555 iops : min= 3732, max= 5480, avg=4912.00, stdev=785.83, samples=5 00:11:05.555 lat (usec) : 4=0.01%, 250=89.90%, 500=10.05%, 750=0.01%, 1000=0.01% 00:11:05.555 lat (msec) : 2=0.01%, 4=0.01% 00:11:05.555 cpu : usr=1.38%, sys=5.40%, ctx=14778, majf=0, minf=2 00:11:05.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.555 issued rwts: total=14777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.555 00:11:05.555 Run status group 0 (all jobs): 00:11:05.555 READ: bw=63.9MiB/s (67.0MB/s), 15.4MiB/s-20.5MiB/s (16.2MB/s-21.5MB/s), io=245MiB (257MB), run=2965-3836msec 00:11:05.555 00:11:05.555 Disk stats (read/write): 00:11:05.555 nvme0n1: ios=13237/0, merge=0/0, ticks=3054/0, in_queue=3054, util=95.33% 00:11:05.555 nvme0n2: ios=15623/0, merge=0/0, ticks=3319/0, in_queue=3319, util=95.69% 00:11:05.555 nvme0n3: ios=16331/0, merge=0/0, ticks=2923/0, in_queue=2923, util=96.46% 00:11:05.555 nvme0n4: ios=14244/0, merge=0/0, ticks=2708/0, in_queue=2708, util=96.79% 00:11:05.555 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.555 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:05.813 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.813 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:06.071 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.071 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:06.329 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.329 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:06.589 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.589 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66851 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.849 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.108 nvmf hotplug test: fio failed as expected 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:07.108 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.367 rmmod nvme_tcp 00:11:07.367 rmmod nvme_fabrics 00:11:07.367 rmmod nvme_keyring 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66477 ']' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66477 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66477 ']' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66477 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66477 00:11:07.367 killing process with pid 66477 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66477' 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66477 00:11:07.367 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66477 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:07.626 14:49:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:07.626 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:07.885 ************************************ 00:11:07.885 END TEST nvmf_fio_target 00:11:07.885 ************************************ 00:11:07.885 00:11:07.885 real 0m19.780s 00:11:07.885 user 1m13.202s 00:11:07.885 sys 0m10.801s 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.885 ************************************ 00:11:07.885 START TEST nvmf_bdevio 00:11:07.885 ************************************ 00:11:07.885 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.145 * Looking for test storage... 
00:11:08.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:08.145 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.146 --rc genhtml_branch_coverage=1 00:11:08.146 --rc genhtml_function_coverage=1 00:11:08.146 --rc genhtml_legend=1 00:11:08.146 --rc geninfo_all_blocks=1 00:11:08.146 --rc geninfo_unexecuted_blocks=1 00:11:08.146 00:11:08.146 ' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.146 --rc genhtml_branch_coverage=1 00:11:08.146 --rc genhtml_function_coverage=1 00:11:08.146 --rc genhtml_legend=1 00:11:08.146 --rc geninfo_all_blocks=1 00:11:08.146 --rc geninfo_unexecuted_blocks=1 00:11:08.146 00:11:08.146 ' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.146 --rc genhtml_branch_coverage=1 00:11:08.146 --rc genhtml_function_coverage=1 00:11:08.146 --rc genhtml_legend=1 00:11:08.146 --rc geninfo_all_blocks=1 00:11:08.146 --rc geninfo_unexecuted_blocks=1 00:11:08.146 00:11:08.146 ' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.146 --rc genhtml_branch_coverage=1 00:11:08.146 --rc genhtml_function_coverage=1 00:11:08.146 --rc genhtml_legend=1 00:11:08.146 --rc geninfo_all_blocks=1 00:11:08.146 --rc geninfo_unexecuted_blocks=1 00:11:08.146 00:11:08.146 ' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.146 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:08.146 Cannot find device "nvmf_init_br" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:08.147 Cannot find device "nvmf_init_br2" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:08.147 Cannot find device "nvmf_tgt_br" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.147 Cannot find device "nvmf_tgt_br2" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:08.147 Cannot find device "nvmf_init_br" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:08.147 Cannot find device "nvmf_init_br2" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:08.147 Cannot find device "nvmf_tgt_br" 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:08.147 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:08.147 Cannot find device "nvmf_tgt_br2" 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:08.406 Cannot find device "nvmf_br" 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:08.406 Cannot find device "nvmf_init_if" 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:08.406 Cannot find device "nvmf_init_if2" 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.406 
14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.406 14:49:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:08.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:08.406 00:11:08.406 --- 10.0.0.3 ping statistics --- 00:11:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.406 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:08.406 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:08.406 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:08.406 00:11:08.406 --- 10.0.0.4 ping statistics --- 00:11:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.406 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:08.406 00:11:08.406 --- 10.0.0.1 ping statistics --- 00:11:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.406 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:08.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:08.406 00:11:08.406 --- 10.0.0.2 ping statistics --- 00:11:08.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.406 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.406 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67228 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67228 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67228 ']' 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.666 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.666 [2024-11-22 14:49:23.163499] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:11:08.666 [2024-11-22 14:49:23.163869] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.666 [2024-11-22 14:49:23.312723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.925 [2024-11-22 14:49:23.391308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.925 [2024-11-22 14:49:23.391928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.925 [2024-11-22 14:49:23.392380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.925 [2024-11-22 14:49:23.392591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.925 [2024-11-22 14:49:23.392604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.925 [2024-11-22 14:49:23.394245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.925 [2024-11-22 14:49:23.394438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:08.925 [2024-11-22 14:49:23.394525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.925 [2024-11-22 14:49:23.394525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:08.925 [2024-11-22 14:49:23.468929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.925 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.925 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:08.925 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.925 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.925 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.183 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.183 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.183 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.184 [2024-11-22 14:49:23.601285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.184 Malloc0 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.184 [2024-11-22 14:49:23.683009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.184 { 00:11:09.184 "params": { 00:11:09.184 "name": "Nvme$subsystem", 00:11:09.184 "trtype": "$TEST_TRANSPORT", 00:11:09.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.184 "adrfam": "ipv4", 00:11:09.184 "trsvcid": "$NVMF_PORT", 00:11:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.184 "hdgst": ${hdgst:-false}, 00:11:09.184 "ddgst": ${ddgst:-false} 00:11:09.184 }, 00:11:09.184 "method": "bdev_nvme_attach_controller" 00:11:09.184 } 00:11:09.184 EOF 00:11:09.184 )") 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:09.184 14:49:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.184 "params": { 00:11:09.184 "name": "Nvme1", 00:11:09.184 "trtype": "tcp", 00:11:09.184 "traddr": "10.0.0.3", 00:11:09.184 "adrfam": "ipv4", 00:11:09.184 "trsvcid": "4420", 00:11:09.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.184 "hdgst": false, 00:11:09.184 "ddgst": false 00:11:09.184 }, 00:11:09.184 "method": "bdev_nvme_attach_controller" 00:11:09.184 }' 00:11:09.184 [2024-11-22 14:49:23.749413] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:11:09.184 [2024-11-22 14:49:23.749535] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67256 ] 00:11:09.443 [2024-11-22 14:49:23.905087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.443 [2024-11-22 14:49:23.983622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.443 [2024-11-22 14:49:23.983776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.443 [2024-11-22 14:49:23.983785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.443 [2024-11-22 14:49:24.070950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.704 I/O targets: 00:11:09.704 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:09.704 00:11:09.704 00:11:09.704 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.704 http://cunit.sourceforge.net/ 00:11:09.704 00:11:09.704 00:11:09.704 Suite: bdevio tests on: Nvme1n1 00:11:09.704 Test: blockdev write read block ...passed 00:11:09.704 Test: blockdev write zeroes read block ...passed 00:11:09.704 Test: blockdev write zeroes read no split ...passed 00:11:09.704 Test: blockdev write zeroes read split ...passed 00:11:09.704 Test: blockdev write zeroes read split partial ...passed 00:11:09.704 Test: blockdev reset ...[2024-11-22 14:49:24.237848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:09.704 [2024-11-22 14:49:24.238130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc52180 (9): Bad file descriptor 00:11:09.704 passed 00:11:09.704 Test: blockdev write read 8 blocks ...[2024-11-22 14:49:24.249850] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:09.704 passed 00:11:09.704 Test: blockdev write read size > 128k ...passed 00:11:09.704 Test: blockdev write read invalid size ...passed 00:11:09.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.704 Test: blockdev write read max offset ...passed 00:11:09.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.704 Test: blockdev writev readv 8 blocks ...passed 00:11:09.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.704 Test: blockdev writev readv block ...passed 00:11:09.704 Test: blockdev writev readv size > 128k ...passed 00:11:09.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.704 Test: blockdev comparev and writev ...[2024-11-22 14:49:24.261449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.261501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.261529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.261544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.261961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.261988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.262009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.262022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.262465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.262491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.262513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.262525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.262969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.263005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.263028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.704 [2024-11-22 14:49:24.263041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:11:09.704 passed 00:11:09.704 Test: blockdev nvme passthru rw ...passed 00:11:09.704 Test: blockdev nvme passthru vendor specific ...[2024-11-22 14:49:24.264387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.704 [2024-11-22 14:49:24.264423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.264573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.704 [2024-11-22 14:49:24.264593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.264742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.704 [2024-11-22 14:49:24.264771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:09.704 [2024-11-22 14:49:24.264919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.704 [2024-11-22 14:49:24.264951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:09.704 passed 00:11:09.704 Test: blockdev nvme admin passthru ...passed 00:11:09.704 Test: blockdev copy ...passed 00:11:09.704 00:11:09.704 Run Summary: Type Total Ran Passed Failed Inactive 00:11:09.704 suites 1 1 n/a 0 0 00:11:09.704 tests 23 23 23 0 0 00:11:09.704 asserts 152 152 152 0 n/a 00:11:09.704 00:11:09.704 Elapsed time = 0.156 seconds 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.967 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.226 rmmod nvme_tcp 00:11:10.226 rmmod nvme_fabrics 00:11:10.226 rmmod nvme_keyring 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 67228 ']' 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67228 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67228 ']' 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67228 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67228 00:11:10.226 killing process with pid 67228 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67228' 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67228 00:11:10.226 14:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67228 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:10.486 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:10.745 14:49:25 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:10.745 00:11:10.745 real 0m2.850s 00:11:10.745 user 0m8.115s 00:11:10.745 sys 0m0.991s 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.745 ************************************ 00:11:10.745 END TEST nvmf_bdevio 00:11:10.745 ************************************ 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:10.745 ************************************ 00:11:10.745 END TEST nvmf_target_core 00:11:10.745 ************************************ 00:11:10.745 00:11:10.745 real 2m36.295s 00:11:10.745 user 6m49.581s 00:11:10.745 sys 0m54.492s 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.745 14:49:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.745 14:49:25 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.745 14:49:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.745 14:49:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.745 14:49:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.005 ************************************ 00:11:11.005 START TEST nvmf_target_extra 00:11:11.005 ************************************ 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:11.005 * Looking for test storage... 
00:11:11.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.005 --rc genhtml_branch_coverage=1 00:11:11.005 --rc genhtml_function_coverage=1 00:11:11.005 --rc genhtml_legend=1 00:11:11.005 --rc geninfo_all_blocks=1 00:11:11.005 --rc geninfo_unexecuted_blocks=1 00:11:11.005 00:11:11.005 ' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.005 --rc genhtml_branch_coverage=1 00:11:11.005 --rc genhtml_function_coverage=1 00:11:11.005 --rc genhtml_legend=1 00:11:11.005 --rc geninfo_all_blocks=1 00:11:11.005 --rc geninfo_unexecuted_blocks=1 00:11:11.005 00:11:11.005 ' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.005 --rc genhtml_branch_coverage=1 00:11:11.005 --rc genhtml_function_coverage=1 00:11:11.005 --rc genhtml_legend=1 00:11:11.005 --rc geninfo_all_blocks=1 00:11:11.005 --rc geninfo_unexecuted_blocks=1 00:11:11.005 00:11:11.005 ' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.005 --rc genhtml_branch_coverage=1 00:11:11.005 --rc genhtml_function_coverage=1 00:11:11.005 --rc genhtml_legend=1 00:11:11.005 --rc geninfo_all_blocks=1 00:11:11.005 --rc geninfo_unexecuted_blocks=1 00:11:11.005 00:11:11.005 ' 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.005 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.006 14:49:25 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.006 ************************************ 00:11:11.006 START TEST nvmf_auth_target 00:11:11.006 ************************************ 00:11:11.006 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:11.266 * Looking for test storage... 
00:11:11.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.266 --rc genhtml_branch_coverage=1 00:11:11.266 --rc genhtml_function_coverage=1 00:11:11.266 --rc genhtml_legend=1 00:11:11.266 --rc geninfo_all_blocks=1 00:11:11.266 --rc geninfo_unexecuted_blocks=1 00:11:11.266 00:11:11.266 ' 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.266 --rc genhtml_branch_coverage=1 00:11:11.266 --rc genhtml_function_coverage=1 00:11:11.266 --rc genhtml_legend=1 00:11:11.266 --rc geninfo_all_blocks=1 00:11:11.266 --rc geninfo_unexecuted_blocks=1 00:11:11.266 00:11:11.266 ' 00:11:11.266 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.266 --rc genhtml_branch_coverage=1 00:11:11.266 --rc genhtml_function_coverage=1 00:11:11.266 --rc genhtml_legend=1 00:11:11.266 --rc geninfo_all_blocks=1 00:11:11.267 --rc geninfo_unexecuted_blocks=1 00:11:11.267 00:11:11.267 ' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.267 --rc genhtml_branch_coverage=1 00:11:11.267 --rc genhtml_function_coverage=1 00:11:11.267 --rc genhtml_legend=1 00:11:11.267 --rc geninfo_all_blocks=1 00:11:11.267 --rc geninfo_unexecuted_blocks=1 00:11:11.267 00:11:11.267 ' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.267 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.267 
14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:11.267 Cannot find device "nvmf_init_br" 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:11.267 Cannot find device "nvmf_init_br2" 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:11.267 Cannot find device "nvmf_tgt_br" 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:11.267 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.268 Cannot find device "nvmf_tgt_br2" 00:11:11.268 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:11.268 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:11.527 Cannot find device "nvmf_init_br" 00:11:11.527 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:11.527 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:11.527 Cannot find device "nvmf_init_br2" 00:11:11.527 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:11.527 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:11.528 Cannot find device "nvmf_tgt_br" 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:11.528 Cannot find device "nvmf_tgt_br2" 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:11.528 Cannot find device "nvmf_br" 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:11.528 Cannot find device "nvmf_init_if" 00:11:11.528 14:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:11.528 14:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:11.528 Cannot find device "nvmf_init_if2" 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.528 14:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.528 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:11.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:11:11.787 00:11:11.787 --- 10.0.0.3 ping statistics --- 00:11:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.787 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:11.787 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:11.787 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:11:11.787 00:11:11.787 --- 10.0.0.4 ping statistics --- 00:11:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.787 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:11.787 00:11:11.787 --- 10.0.0.1 ping statistics --- 00:11:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.787 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:11.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:11.787 00:11:11.787 --- 10.0.0.2 ping statistics --- 00:11:11.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.787 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67550 00:11:11.787 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67550 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67550 ']' 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
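The ip/iptables commands above build the virtual network that the rest of this run relies on: four veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, all peer ends enslaved to the nvmf_br bridge, TCP port 4420 opened on the initiator interfaces, and the 10.0.0.3/10.0.0.4 target addresses verified with ping. The following is a condensed, standalone sketch of that topology, not the actual nvmf_veth_init function from nvmf/common.sh; the device names, namespace name, addresses, and iptables rules are copied from the log lines above, and it assumes root on a Linux host with iproute2 and iptables installed.

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built in the log above (not SPDK's
# nvmf/common.sh code): initiator endpoints stay in the root namespace,
# target endpoints move into nvmf_tgt_ns_spdk, peers are bridged together.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
BR=nvmf_br

ip netns add "$NS"

# veth pairs: <endpoint> <-> <bridge-side peer>
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing as in the log: initiators .1/.2, targets .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and enslave the peer ends to one bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add "$BR" type bridge
ip link set "$BR" up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master "$BR"
done

# Allow NVMe/TCP (port 4420) in, plus bridge-internal forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i "$BR" -o "$BR" -j ACCEPT

# Sanity check, as in the log: the root namespace can reach the targets
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4

Putting the target interfaces in a separate namespace lets the nvmf_tgt application listen on 10.0.0.3/10.0.0.4 while the initiator connects from the root namespace across the bridge, which is how the bdev_nvme_attach_controller and nvme connect calls later in this log reach the target.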
00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.788 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.355 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.355 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:12.355 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.355 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.355 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67569 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1c21140cbf857513649ea393e58d578c02e1b2f5af5bf57d 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fNK 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1c21140cbf857513649ea393e58d578c02e1b2f5af5bf57d 0 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1c21140cbf857513649ea393e58d578c02e1b2f5af5bf57d 0 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1c21140cbf857513649ea393e58d578c02e1b2f5af5bf57d 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.356 14:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fNK 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fNK 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.fNK 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=89781fec6d03baaa17b340c30cc83113a3fdd42a053afccc40c9cda9df7aa654 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EW6 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 89781fec6d03baaa17b340c30cc83113a3fdd42a053afccc40c9cda9df7aa654 3 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 89781fec6d03baaa17b340c30cc83113a3fdd42a053afccc40c9cda9df7aa654 3 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=89781fec6d03baaa17b340c30cc83113a3fdd42a053afccc40c9cda9df7aa654 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EW6 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EW6 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.EW6 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:12.356 14:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=31c95783e40f475edae0b29da778424e 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.F5I 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 31c95783e40f475edae0b29da778424e 1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 31c95783e40f475edae0b29da778424e 1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=31c95783e40f475edae0b29da778424e 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:12.356 14:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.F5I 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.F5I 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.F5I 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f23fd8d5aba2d524ad16565dc3631dbf2cc8d555d2be0a15 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.F9z 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f23fd8d5aba2d524ad16565dc3631dbf2cc8d555d2be0a15 2 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f23fd8d5aba2d524ad16565dc3631dbf2cc8d555d2be0a15 2 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f23fd8d5aba2d524ad16565dc3631dbf2cc8d555d2be0a15 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.F9z 00:11:12.615 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.F9z 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.F9z 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a59c7598091a689e542fbe7ab33efb59964baeace32eb5f6 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AM5 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a59c7598091a689e542fbe7ab33efb59964baeace32eb5f6 2 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a59c7598091a689e542fbe7ab33efb59964baeace32eb5f6 2 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a59c7598091a689e542fbe7ab33efb59964baeace32eb5f6 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AM5 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AM5 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AM5 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.616 14:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=768a6e4369bb0f09892ecbd50c26c6f8 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bNq 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 768a6e4369bb0f09892ecbd50c26c6f8 1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 768a6e4369bb0f09892ecbd50c26c6f8 1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=768a6e4369bb0f09892ecbd50c26c6f8 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bNq 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bNq 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.bNq 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c135d84e0868e01d5b2ade11221b1144f2fbcf0a0b9107a048f6cf94b1f2bd8 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.weA 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
7c135d84e0868e01d5b2ade11221b1144f2fbcf0a0b9107a048f6cf94b1f2bd8 3 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7c135d84e0868e01d5b2ade11221b1144f2fbcf0a0b9107a048f6cf94b1f2bd8 3 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c135d84e0868e01d5b2ade11221b1144f2fbcf0a0b9107a048f6cf94b1f2bd8 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:12.616 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.weA 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.weA 00:11:12.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.weA 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67550 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67550 ']' 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.875 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67569 /var/tmp/host.sock 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67569 ']' 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:13.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
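The gen_dhchap_key calls above draw the requested number of hex characters from /dev/urandom with xxd and hand them to format_dhchap_key (the inline "python -" step) to produce the DHHC-1 secrets used for DH-HMAC-CHAP later in the run. A minimal stand-in for those helpers might look like the sketch below; it assumes the secret encoding is base64 of the secret bytes followed by their little-endian CRC-32, with a two-digit digest identifier (null=0, sha256=1, sha384=2, sha512=3), which matches the DHHC-1:00:/01:/03: strings that appear further down in this log, but it is not SPDK's actual nvmf/common.sh code.

#!/usr/bin/env bash
# Sketch of gen_dhchap_key/format_dhchap_key (assumed behaviour, see note above).
set -euo pipefail

gen_dhchap_key() {   # usage: gen_dhchap_key <digest> <hex-length>
    local digest=$1 len=$2 key digest_id
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    digest_id=${digests[$digest]}

    # As in the log: draw <len> hex characters from /dev/urandom
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    # Assumed DHHC-1 encoding: base64(secret bytes || CRC-32, little-endian)
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest_id = int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest_id:02x}:{base64.b64encode(blob).decode()}:")
' "$key" "$digest_id"
}

# Example: a 48-hex-character key for the null digest, stored with mode 0600
file=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key null 48 > "$file"
chmod 0600 "$file"
echo "$file"

Each generated key file is then registered on both sides, as the following lines show: keyring_file_add_key keyN/ckeyN against the target's default RPC socket, and the same call via rpc.py -s /var/tmp/host.sock against the host application.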
00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.133 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fNK 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fNK 00:11:13.392 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fNK 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.EW6 ]] 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EW6 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EW6 00:11:13.651 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EW6 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.F5I 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.F5I 00:11:13.909 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.F5I 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.F9z ]] 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F9z 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F9z 00:11:14.477 14:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F9z 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AM5 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AM5 00:11:14.477 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AM5 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.bNq ]] 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNq 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNq 00:11:14.736 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNq 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.weA 00:11:14.995 14:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.weA 00:11:14.995 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.weA 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.254 14:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.513 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.772 00:11:15.772 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.772 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.772 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.031 { 00:11:16.031 "cntlid": 1, 00:11:16.031 "qid": 0, 00:11:16.031 "state": "enabled", 00:11:16.031 "thread": "nvmf_tgt_poll_group_000", 00:11:16.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:16.031 "listen_address": { 00:11:16.031 "trtype": "TCP", 00:11:16.031 "adrfam": "IPv4", 00:11:16.031 "traddr": "10.0.0.3", 00:11:16.031 "trsvcid": "4420" 00:11:16.031 }, 00:11:16.031 "peer_address": { 00:11:16.031 "trtype": "TCP", 00:11:16.031 "adrfam": "IPv4", 00:11:16.031 "traddr": "10.0.0.1", 00:11:16.031 "trsvcid": "57780" 00:11:16.031 }, 00:11:16.031 "auth": { 00:11:16.031 "state": "completed", 00:11:16.031 "digest": "sha256", 00:11:16.031 "dhgroup": "null" 00:11:16.031 } 00:11:16.031 } 00:11:16.031 ]' 00:11:16.031 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.291 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.549 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:16.549 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:20.740 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:20.741 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:20.999 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:20.999 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.999 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.999 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:20.999 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.000 14:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.000 14:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.567 00:11:21.567 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.567 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.567 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.827 { 00:11:21.827 "cntlid": 3, 00:11:21.827 "qid": 0, 00:11:21.827 "state": "enabled", 00:11:21.827 "thread": "nvmf_tgt_poll_group_000", 00:11:21.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:21.827 "listen_address": { 00:11:21.827 "trtype": "TCP", 00:11:21.827 "adrfam": "IPv4", 00:11:21.827 "traddr": "10.0.0.3", 00:11:21.827 "trsvcid": "4420" 00:11:21.827 }, 00:11:21.827 "peer_address": { 00:11:21.827 "trtype": "TCP", 00:11:21.827 "adrfam": "IPv4", 00:11:21.827 "traddr": "10.0.0.1", 00:11:21.827 "trsvcid": "57814" 00:11:21.827 }, 00:11:21.827 "auth": { 00:11:21.827 "state": "completed", 00:11:21.827 "digest": "sha256", 00:11:21.827 "dhgroup": "null" 00:11:21.827 } 00:11:21.827 } 00:11:21.827 ]' 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.827 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.395 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret 
DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:22.395 14:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:22.963 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.222 14:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.789 00:11:23.789 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.789 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.789 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.048 { 00:11:24.048 "cntlid": 5, 00:11:24.048 "qid": 0, 00:11:24.048 "state": "enabled", 00:11:24.048 "thread": "nvmf_tgt_poll_group_000", 00:11:24.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:24.048 "listen_address": { 00:11:24.048 "trtype": "TCP", 00:11:24.048 "adrfam": "IPv4", 00:11:24.048 "traddr": "10.0.0.3", 00:11:24.048 "trsvcid": "4420" 00:11:24.048 }, 00:11:24.048 "peer_address": { 00:11:24.048 "trtype": "TCP", 00:11:24.048 "adrfam": "IPv4", 00:11:24.048 "traddr": "10.0.0.1", 00:11:24.048 "trsvcid": "42350" 00:11:24.048 }, 00:11:24.048 "auth": { 00:11:24.048 "state": "completed", 00:11:24.048 "digest": "sha256", 00:11:24.048 "dhgroup": "null" 00:11:24.048 } 00:11:24.048 } 00:11:24.048 ]' 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.048 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.615 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:24.615 14:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:25.181 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.440 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.700 00:11:25.700 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.700 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.700 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.268 { 00:11:26.268 "cntlid": 7, 00:11:26.268 "qid": 0, 00:11:26.268 "state": "enabled", 00:11:26.268 "thread": "nvmf_tgt_poll_group_000", 00:11:26.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:26.268 "listen_address": { 00:11:26.268 "trtype": "TCP", 00:11:26.268 "adrfam": "IPv4", 00:11:26.268 "traddr": "10.0.0.3", 00:11:26.268 "trsvcid": "4420" 00:11:26.268 }, 00:11:26.268 "peer_address": { 00:11:26.268 "trtype": "TCP", 00:11:26.268 "adrfam": "IPv4", 00:11:26.268 "traddr": "10.0.0.1", 00:11:26.268 "trsvcid": "42380" 00:11:26.268 }, 00:11:26.268 "auth": { 00:11:26.268 "state": "completed", 00:11:26.268 "digest": "sha256", 00:11:26.268 "dhgroup": "null" 00:11:26.268 } 00:11:26.268 } 00:11:26.268 ]' 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.268 14:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.526 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:26.526 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.200 14:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.458 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:27.458 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.458 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.458 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.458 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:27.459 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.459 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.459 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.459 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.717 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.717 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.717 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.717 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.976 00:11:27.976 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.976 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.976 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.235 { 00:11:28.235 "cntlid": 9, 00:11:28.235 "qid": 0, 00:11:28.235 "state": "enabled", 00:11:28.235 "thread": "nvmf_tgt_poll_group_000", 00:11:28.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:28.235 "listen_address": { 00:11:28.235 "trtype": "TCP", 00:11:28.235 "adrfam": "IPv4", 00:11:28.235 "traddr": "10.0.0.3", 00:11:28.235 "trsvcid": "4420" 00:11:28.235 }, 00:11:28.235 "peer_address": { 00:11:28.235 "trtype": "TCP", 00:11:28.235 "adrfam": "IPv4", 00:11:28.235 "traddr": "10.0.0.1", 00:11:28.235 "trsvcid": "42398" 00:11:28.235 }, 00:11:28.235 "auth": { 00:11:28.235 "state": "completed", 00:11:28.235 "digest": "sha256", 00:11:28.235 "dhgroup": "ffdhe2048" 00:11:28.235 } 00:11:28.235 } 00:11:28.235 ]' 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.235 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.493 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.493 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.493 14:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.751 
14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:28.751 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:29.318 14:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.577 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.835 00:11:29.835 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.835 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.835 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.094 { 00:11:30.094 "cntlid": 11, 00:11:30.094 "qid": 0, 00:11:30.094 "state": "enabled", 00:11:30.094 "thread": "nvmf_tgt_poll_group_000", 00:11:30.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:30.094 "listen_address": { 00:11:30.094 "trtype": "TCP", 00:11:30.094 "adrfam": "IPv4", 00:11:30.094 "traddr": "10.0.0.3", 00:11:30.094 "trsvcid": "4420" 00:11:30.094 }, 00:11:30.094 "peer_address": { 00:11:30.094 "trtype": "TCP", 00:11:30.094 "adrfam": "IPv4", 00:11:30.094 "traddr": "10.0.0.1", 00:11:30.094 "trsvcid": "42424" 00:11:30.094 }, 00:11:30.094 "auth": { 00:11:30.094 "state": "completed", 00:11:30.094 "digest": "sha256", 00:11:30.094 "dhgroup": "ffdhe2048" 00:11:30.094 } 00:11:30.094 } 00:11:30.094 ]' 00:11:30.094 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.352 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.352 
14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.611 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:30.611 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:31.178 14:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.744 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.002 00:11:32.002 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.002 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.002 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.261 { 00:11:32.261 "cntlid": 13, 00:11:32.261 "qid": 0, 00:11:32.261 "state": "enabled", 00:11:32.261 "thread": "nvmf_tgt_poll_group_000", 00:11:32.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:32.261 "listen_address": { 00:11:32.261 "trtype": "TCP", 00:11:32.261 "adrfam": "IPv4", 00:11:32.261 "traddr": "10.0.0.3", 00:11:32.261 "trsvcid": "4420" 00:11:32.261 }, 00:11:32.261 "peer_address": { 00:11:32.261 "trtype": "TCP", 00:11:32.261 "adrfam": "IPv4", 00:11:32.261 "traddr": "10.0.0.1", 00:11:32.261 "trsvcid": "40412" 00:11:32.261 }, 00:11:32.261 "auth": { 00:11:32.261 "state": "completed", 00:11:32.261 "digest": "sha256", 00:11:32.261 "dhgroup": "ffdhe2048" 00:11:32.261 } 00:11:32.261 } 00:11:32.261 ]' 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:32.261 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.520 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.520 14:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.520 14:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.779 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:32.779 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:33.346 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:33.347 14:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
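At this point the trace has re-registered the host for the next key/dhgroup combination: bdev_nvme_set_options narrows the initiator to sha256/ffdhe2048 and nvmf_subsystem_add_host binds the host NQN to the key3 secret on the target. A minimal bash sketch of that setup step, condensed from the commands shown in the trace; it assumes the target app answers on its default RPC socket, the host app on /var/tmp/host.sock, and that key3 (and the ckeyN names used in earlier iterations) were loaded into the keyring earlier in the script:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Host side: restrict the initiator to one DH-HMAC-CHAP digest/dhgroup pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Target side: allow this host NQN and tie it to its DH-HMAC-CHAP key.
  # Earlier iterations also pass --dhchap-ctrlr-key ckeyN, which is what makes
  # the controller authenticate back to the host (bidirectional auth).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

The key3 iteration deliberately omits the controller key, exactly as in the add_host and attach_controller calls above, so only host-to-controller authentication is exercised in this pass.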
00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.606 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.865 00:11:33.865 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.865 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.865 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.432 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.432 { 00:11:34.432 "cntlid": 15, 00:11:34.432 "qid": 0, 00:11:34.432 "state": "enabled", 00:11:34.432 "thread": "nvmf_tgt_poll_group_000", 00:11:34.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:34.432 "listen_address": { 00:11:34.432 "trtype": "TCP", 00:11:34.433 "adrfam": "IPv4", 00:11:34.433 "traddr": "10.0.0.3", 00:11:34.433 "trsvcid": "4420" 00:11:34.433 }, 00:11:34.433 "peer_address": { 00:11:34.433 "trtype": "TCP", 00:11:34.433 "adrfam": "IPv4", 00:11:34.433 "traddr": "10.0.0.1", 00:11:34.433 "trsvcid": "40448" 00:11:34.433 }, 00:11:34.433 "auth": { 00:11:34.433 "state": "completed", 00:11:34.433 "digest": "sha256", 00:11:34.433 "dhgroup": "ffdhe2048" 00:11:34.433 } 00:11:34.433 } 00:11:34.433 ]' 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.433 
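Each authenticated attach is then verified from both ends before it is torn down: bdev_nvme_get_controllers confirms the nvme0 controller exists on the host app, and nvmf_subsystem_get_qpairs on the target reports the negotiated digest, dhgroup, and an auth state of "completed". A condensed sketch of that verification, using the same RPCs and jq filters as the trace (the $rpc variable, the one-check-per-line layout, and the default-socket assumption for the target are editorial; the script itself goes through its hostrpc/rpc_cmd helpers):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Host side: the authenticated attach should have produced controller nvme0.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Target side: the qpair should show the negotiated parameters and a
  # completed DH-HMAC-CHAP exchange.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # Detach before the next digest/dhgroup/key combination is exercised.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same combination is then exercised once more through the kernel initiator: nvme connect is given the raw DHHC-1 secrets via --dhchap-secret/--dhchap-ctrl-secret, and nvme disconnect plus nvmf_subsystem_remove_host reset the state for the next loop iteration.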
14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.433 14:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.697 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:34.697 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.272 14:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.545 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.803 00:11:35.803 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.803 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.803 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.061 { 00:11:36.061 "cntlid": 17, 00:11:36.061 "qid": 0, 00:11:36.061 "state": "enabled", 00:11:36.061 "thread": "nvmf_tgt_poll_group_000", 00:11:36.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:36.061 "listen_address": { 00:11:36.061 "trtype": "TCP", 00:11:36.061 "adrfam": "IPv4", 00:11:36.061 "traddr": "10.0.0.3", 00:11:36.061 "trsvcid": "4420" 00:11:36.061 }, 00:11:36.061 "peer_address": { 00:11:36.061 "trtype": "TCP", 00:11:36.061 "adrfam": "IPv4", 00:11:36.061 "traddr": "10.0.0.1", 00:11:36.061 "trsvcid": "40460" 00:11:36.061 }, 00:11:36.061 "auth": { 00:11:36.061 "state": "completed", 00:11:36.061 "digest": "sha256", 00:11:36.061 "dhgroup": "ffdhe3072" 00:11:36.061 } 00:11:36.061 } 00:11:36.061 ]' 00:11:36.061 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.320 14:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.320 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.578 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:36.578 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.177 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.436 14:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.694 00:11:37.694 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.694 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.694 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.952 { 00:11:37.952 "cntlid": 19, 00:11:37.952 "qid": 0, 00:11:37.952 "state": "enabled", 00:11:37.952 "thread": "nvmf_tgt_poll_group_000", 00:11:37.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:37.952 "listen_address": { 00:11:37.952 "trtype": "TCP", 00:11:37.952 "adrfam": "IPv4", 00:11:37.952 "traddr": "10.0.0.3", 00:11:37.952 "trsvcid": "4420" 00:11:37.952 }, 00:11:37.952 "peer_address": { 00:11:37.952 "trtype": "TCP", 00:11:37.952 "adrfam": "IPv4", 00:11:37.952 "traddr": "10.0.0.1", 00:11:37.952 "trsvcid": "40482" 00:11:37.952 }, 00:11:37.952 "auth": { 00:11:37.952 "state": "completed", 00:11:37.952 "digest": "sha256", 00:11:37.952 "dhgroup": "ffdhe3072" 00:11:37.952 } 00:11:37.952 } 00:11:37.952 ]' 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.952 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.211 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:38.211 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.211 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.211 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.211 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.469 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:38.469 14:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.035 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.293 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.859 00:11:39.859 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.859 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.859 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.118 { 00:11:40.118 "cntlid": 21, 00:11:40.118 "qid": 0, 00:11:40.118 "state": "enabled", 00:11:40.118 "thread": "nvmf_tgt_poll_group_000", 00:11:40.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:40.118 "listen_address": { 00:11:40.118 "trtype": "TCP", 00:11:40.118 "adrfam": "IPv4", 00:11:40.118 "traddr": "10.0.0.3", 00:11:40.118 "trsvcid": "4420" 00:11:40.118 }, 00:11:40.118 "peer_address": { 00:11:40.118 "trtype": "TCP", 00:11:40.118 "adrfam": "IPv4", 00:11:40.118 "traddr": "10.0.0.1", 00:11:40.118 "trsvcid": "40506" 00:11:40.118 }, 00:11:40.118 "auth": { 00:11:40.118 "state": "completed", 00:11:40.118 "digest": "sha256", 00:11:40.118 "dhgroup": "ffdhe3072" 00:11:40.118 } 00:11:40.118 } 00:11:40.118 ]' 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.118 14:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.118 14:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.377 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:40.377 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:41.316 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.574 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.575 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.833 00:11:41.833 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.833 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.833 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.091 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.091 { 00:11:42.091 "cntlid": 23, 00:11:42.091 "qid": 0, 00:11:42.091 "state": "enabled", 00:11:42.091 "thread": "nvmf_tgt_poll_group_000", 00:11:42.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:42.091 "listen_address": { 00:11:42.091 "trtype": "TCP", 00:11:42.091 "adrfam": "IPv4", 00:11:42.091 "traddr": "10.0.0.3", 00:11:42.091 "trsvcid": "4420" 00:11:42.091 }, 00:11:42.091 "peer_address": { 00:11:42.091 "trtype": "TCP", 00:11:42.091 "adrfam": "IPv4", 00:11:42.091 "traddr": "10.0.0.1", 00:11:42.091 "trsvcid": "40548" 00:11:42.091 }, 00:11:42.092 "auth": { 00:11:42.092 "state": "completed", 00:11:42.092 "digest": "sha256", 00:11:42.092 "dhgroup": "ffdhe3072" 00:11:42.092 } 00:11:42.092 } 00:11:42.092 ]' 00:11:42.092 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.092 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:42.092 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.350 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:42.350 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.350 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.350 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.350 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.608 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:42.608 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.176 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.744 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.003 00:11:44.003 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.003 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.003 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.262 { 00:11:44.262 "cntlid": 25, 00:11:44.262 "qid": 0, 00:11:44.262 "state": "enabled", 00:11:44.262 "thread": "nvmf_tgt_poll_group_000", 00:11:44.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:44.262 "listen_address": { 00:11:44.262 "trtype": "TCP", 00:11:44.262 "adrfam": "IPv4", 00:11:44.262 "traddr": "10.0.0.3", 00:11:44.262 "trsvcid": "4420" 00:11:44.262 }, 00:11:44.262 "peer_address": { 00:11:44.262 "trtype": "TCP", 00:11:44.262 "adrfam": "IPv4", 00:11:44.262 "traddr": "10.0.0.1", 00:11:44.262 "trsvcid": "59102" 00:11:44.262 }, 00:11:44.262 "auth": { 00:11:44.262 "state": "completed", 00:11:44.262 "digest": "sha256", 00:11:44.262 "dhgroup": "ffdhe4096" 00:11:44.262 } 00:11:44.262 } 00:11:44.262 ]' 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.262 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.521 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.521 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.521 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.521 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.521 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.781 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:44.781 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.349 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.921 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.922 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.180 00:11:46.180 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.180 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.180 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.440 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.440 { 00:11:46.440 "cntlid": 27, 00:11:46.440 "qid": 0, 00:11:46.440 "state": "enabled", 00:11:46.440 "thread": "nvmf_tgt_poll_group_000", 00:11:46.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:46.440 "listen_address": { 00:11:46.440 "trtype": "TCP", 00:11:46.440 "adrfam": "IPv4", 00:11:46.440 "traddr": "10.0.0.3", 00:11:46.440 "trsvcid": "4420" 00:11:46.440 }, 00:11:46.440 "peer_address": { 00:11:46.440 "trtype": "TCP", 00:11:46.440 "adrfam": "IPv4", 00:11:46.440 "traddr": "10.0.0.1", 00:11:46.440 "trsvcid": "59130" 00:11:46.440 }, 00:11:46.440 "auth": { 00:11:46.440 "state": "completed", 
00:11:46.440 "digest": "sha256", 00:11:46.440 "dhgroup": "ffdhe4096" 00:11:46.440 } 00:11:46.440 } 00:11:46.440 ]' 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.440 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.698 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:46.698 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.698 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.698 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.698 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.957 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:46.957 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.525 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.784 14:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.784 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.352 00:11:48.352 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.352 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.352 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.611 { 00:11:48.611 "cntlid": 29, 00:11:48.611 "qid": 0, 00:11:48.611 "state": "enabled", 00:11:48.611 "thread": "nvmf_tgt_poll_group_000", 00:11:48.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:48.611 "listen_address": { 00:11:48.611 "trtype": "TCP", 00:11:48.611 "adrfam": "IPv4", 00:11:48.611 "traddr": "10.0.0.3", 00:11:48.611 "trsvcid": "4420" 00:11:48.611 }, 00:11:48.611 "peer_address": { 00:11:48.611 "trtype": "TCP", 00:11:48.611 "adrfam": 
"IPv4", 00:11:48.611 "traddr": "10.0.0.1", 00:11:48.611 "trsvcid": "59154" 00:11:48.611 }, 00:11:48.611 "auth": { 00:11:48.611 "state": "completed", 00:11:48.611 "digest": "sha256", 00:11:48.611 "dhgroup": "ffdhe4096" 00:11:48.611 } 00:11:48.611 } 00:11:48.611 ]' 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.611 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.612 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.871 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:48.871 14:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:49.808 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:50.067 14:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.067 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.326 00:11:50.326 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.326 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.326 14:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.585 { 00:11:50.585 "cntlid": 31, 00:11:50.585 "qid": 0, 00:11:50.585 "state": "enabled", 00:11:50.585 "thread": "nvmf_tgt_poll_group_000", 00:11:50.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:50.585 "listen_address": { 00:11:50.585 "trtype": "TCP", 00:11:50.585 "adrfam": "IPv4", 00:11:50.585 "traddr": "10.0.0.3", 00:11:50.585 "trsvcid": "4420" 00:11:50.585 }, 00:11:50.585 "peer_address": { 00:11:50.585 "trtype": "TCP", 
00:11:50.585 "adrfam": "IPv4", 00:11:50.585 "traddr": "10.0.0.1", 00:11:50.585 "trsvcid": "59192" 00:11:50.585 }, 00:11:50.585 "auth": { 00:11:50.585 "state": "completed", 00:11:50.585 "digest": "sha256", 00:11:50.585 "dhgroup": "ffdhe4096" 00:11:50.585 } 00:11:50.585 } 00:11:50.585 ]' 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.585 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.844 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:50.844 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.844 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.844 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.844 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.104 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:51.104 14:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:52.041 
14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.041 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.608 00:11:52.608 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.608 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.608 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.867 { 00:11:52.867 "cntlid": 33, 00:11:52.867 "qid": 0, 00:11:52.867 "state": "enabled", 00:11:52.867 "thread": "nvmf_tgt_poll_group_000", 00:11:52.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:52.867 "listen_address": { 00:11:52.867 "trtype": "TCP", 00:11:52.867 "adrfam": "IPv4", 00:11:52.867 "traddr": 
"10.0.0.3", 00:11:52.867 "trsvcid": "4420" 00:11:52.867 }, 00:11:52.867 "peer_address": { 00:11:52.867 "trtype": "TCP", 00:11:52.867 "adrfam": "IPv4", 00:11:52.867 "traddr": "10.0.0.1", 00:11:52.867 "trsvcid": "55896" 00:11:52.867 }, 00:11:52.867 "auth": { 00:11:52.867 "state": "completed", 00:11:52.867 "digest": "sha256", 00:11:52.867 "dhgroup": "ffdhe6144" 00:11:52.867 } 00:11:52.867 } 00:11:52.867 ]' 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.867 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.126 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.126 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.126 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.386 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:53.386 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:53.954 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.213 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.780 00:11:54.780 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.780 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.780 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.038 { 00:11:55.038 "cntlid": 35, 00:11:55.038 "qid": 0, 00:11:55.038 "state": "enabled", 00:11:55.038 "thread": "nvmf_tgt_poll_group_000", 
00:11:55.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:55.038 "listen_address": { 00:11:55.038 "trtype": "TCP", 00:11:55.038 "adrfam": "IPv4", 00:11:55.038 "traddr": "10.0.0.3", 00:11:55.038 "trsvcid": "4420" 00:11:55.038 }, 00:11:55.038 "peer_address": { 00:11:55.038 "trtype": "TCP", 00:11:55.038 "adrfam": "IPv4", 00:11:55.038 "traddr": "10.0.0.1", 00:11:55.038 "trsvcid": "55922" 00:11:55.038 }, 00:11:55.038 "auth": { 00:11:55.038 "state": "completed", 00:11:55.038 "digest": "sha256", 00:11:55.038 "dhgroup": "ffdhe6144" 00:11:55.038 } 00:11:55.038 } 00:11:55.038 ]' 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.038 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.296 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.296 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.296 14:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.554 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:55.555 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.121 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:56.121 14:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.380 14:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.380 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.947 00:11:56.947 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.947 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.947 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.206 { 
00:11:57.206 "cntlid": 37, 00:11:57.206 "qid": 0, 00:11:57.206 "state": "enabled", 00:11:57.206 "thread": "nvmf_tgt_poll_group_000", 00:11:57.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:57.206 "listen_address": { 00:11:57.206 "trtype": "TCP", 00:11:57.206 "adrfam": "IPv4", 00:11:57.206 "traddr": "10.0.0.3", 00:11:57.206 "trsvcid": "4420" 00:11:57.206 }, 00:11:57.206 "peer_address": { 00:11:57.206 "trtype": "TCP", 00:11:57.206 "adrfam": "IPv4", 00:11:57.206 "traddr": "10.0.0.1", 00:11:57.206 "trsvcid": "55952" 00:11:57.206 }, 00:11:57.206 "auth": { 00:11:57.206 "state": "completed", 00:11:57.206 "digest": "sha256", 00:11:57.206 "dhgroup": "ffdhe6144" 00:11:57.206 } 00:11:57.206 } 00:11:57.206 ]' 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.206 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.464 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.465 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.465 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.465 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.465 14:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.723 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:57.723 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:11:58.289 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:58.548 14:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.806 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.807 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.374 00:11:59.374 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.374 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.374 14:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.633 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:59.633 { 00:11:59.633 "cntlid": 39, 00:11:59.633 "qid": 0, 00:11:59.633 "state": "enabled", 00:11:59.633 "thread": "nvmf_tgt_poll_group_000", 00:11:59.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:11:59.633 "listen_address": { 00:11:59.633 "trtype": "TCP", 00:11:59.633 "adrfam": "IPv4", 00:11:59.633 "traddr": "10.0.0.3", 00:11:59.633 "trsvcid": "4420" 00:11:59.633 }, 00:11:59.633 "peer_address": { 00:11:59.633 "trtype": "TCP", 00:11:59.633 "adrfam": "IPv4", 00:11:59.633 "traddr": "10.0.0.1", 00:11:59.634 "trsvcid": "55970" 00:11:59.634 }, 00:11:59.634 "auth": { 00:11:59.634 "state": "completed", 00:11:59.634 "digest": "sha256", 00:11:59.634 "dhgroup": "ffdhe6144" 00:11:59.634 } 00:11:59.634 } 00:11:59.634 ]' 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.634 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.892 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:11:59.892 14:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.830 14:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.767 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.767 { 00:12:01.767 "cntlid": 41, 00:12:01.767 "qid": 0, 00:12:01.767 "state": "enabled", 00:12:01.767 "thread": "nvmf_tgt_poll_group_000", 00:12:01.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:01.767 "listen_address": { 00:12:01.767 "trtype": "TCP", 00:12:01.767 "adrfam": "IPv4", 00:12:01.767 "traddr": "10.0.0.3", 00:12:01.767 "trsvcid": "4420" 00:12:01.767 }, 00:12:01.767 "peer_address": { 00:12:01.767 "trtype": "TCP", 00:12:01.767 "adrfam": "IPv4", 00:12:01.767 "traddr": "10.0.0.1", 00:12:01.767 "trsvcid": "55978" 00:12:01.767 }, 00:12:01.767 "auth": { 00:12:01.767 "state": "completed", 00:12:01.767 "digest": "sha256", 00:12:01.767 "dhgroup": "ffdhe8192" 00:12:01.767 } 00:12:01.767 } 00:12:01.767 ]' 00:12:01.767 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.026 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.321 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:02.321 14:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:02.889 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.148 14:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.715 00:12:03.715 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.715 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.715 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.974 14:50:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.974 { 00:12:03.974 "cntlid": 43, 00:12:03.974 "qid": 0, 00:12:03.974 "state": "enabled", 00:12:03.974 "thread": "nvmf_tgt_poll_group_000", 00:12:03.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:03.974 "listen_address": { 00:12:03.974 "trtype": "TCP", 00:12:03.974 "adrfam": "IPv4", 00:12:03.974 "traddr": "10.0.0.3", 00:12:03.974 "trsvcid": "4420" 00:12:03.974 }, 00:12:03.974 "peer_address": { 00:12:03.974 "trtype": "TCP", 00:12:03.974 "adrfam": "IPv4", 00:12:03.974 "traddr": "10.0.0.1", 00:12:03.974 "trsvcid": "44554" 00:12:03.974 }, 00:12:03.974 "auth": { 00:12:03.974 "state": "completed", 00:12:03.974 "digest": "sha256", 00:12:03.974 "dhgroup": "ffdhe8192" 00:12:03.974 } 00:12:03.974 } 00:12:03.974 ]' 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.974 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.233 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.233 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.233 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.233 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.233 14:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.492 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:04.492 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:05.059 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.577 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.577 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.577 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.577 14:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.145 00:12:06.145 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.145 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.145 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.404 14:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.404 { 00:12:06.404 "cntlid": 45, 00:12:06.404 "qid": 0, 00:12:06.404 "state": "enabled", 00:12:06.404 "thread": "nvmf_tgt_poll_group_000", 00:12:06.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:06.404 "listen_address": { 00:12:06.404 "trtype": "TCP", 00:12:06.404 "adrfam": "IPv4", 00:12:06.404 "traddr": "10.0.0.3", 00:12:06.404 "trsvcid": "4420" 00:12:06.404 }, 00:12:06.404 "peer_address": { 00:12:06.404 "trtype": "TCP", 00:12:06.404 "adrfam": "IPv4", 00:12:06.404 "traddr": "10.0.0.1", 00:12:06.404 "trsvcid": "44590" 00:12:06.404 }, 00:12:06.404 "auth": { 00:12:06.404 "state": "completed", 00:12:06.404 "digest": "sha256", 00:12:06.404 "dhgroup": "ffdhe8192" 00:12:06.404 } 00:12:06.404 } 00:12:06.404 ]' 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.404 14:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.663 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:06.663 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:07.232 14:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.491 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.059 00:12:08.059 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.059 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.059 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.318 
14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.318 { 00:12:08.318 "cntlid": 47, 00:12:08.318 "qid": 0, 00:12:08.318 "state": "enabled", 00:12:08.318 "thread": "nvmf_tgt_poll_group_000", 00:12:08.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:08.318 "listen_address": { 00:12:08.318 "trtype": "TCP", 00:12:08.318 "adrfam": "IPv4", 00:12:08.318 "traddr": "10.0.0.3", 00:12:08.318 "trsvcid": "4420" 00:12:08.318 }, 00:12:08.318 "peer_address": { 00:12:08.318 "trtype": "TCP", 00:12:08.318 "adrfam": "IPv4", 00:12:08.318 "traddr": "10.0.0.1", 00:12:08.318 "trsvcid": "44604" 00:12:08.318 }, 00:12:08.318 "auth": { 00:12:08.318 "state": "completed", 00:12:08.318 "digest": "sha256", 00:12:08.318 "dhgroup": "ffdhe8192" 00:12:08.318 } 00:12:08.318 } 00:12:08.318 ]' 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.318 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.577 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:08.577 14:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.577 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.577 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.577 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.900 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:08.900 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.475 14:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.738 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.997 00:12:09.997 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.997 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.998 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.257 { 00:12:10.257 "cntlid": 49, 00:12:10.257 "qid": 0, 00:12:10.257 "state": "enabled", 00:12:10.257 "thread": "nvmf_tgt_poll_group_000", 00:12:10.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:10.257 "listen_address": { 00:12:10.257 "trtype": "TCP", 00:12:10.257 "adrfam": "IPv4", 00:12:10.257 "traddr": "10.0.0.3", 00:12:10.257 "trsvcid": "4420" 00:12:10.257 }, 00:12:10.257 "peer_address": { 00:12:10.257 "trtype": "TCP", 00:12:10.257 "adrfam": "IPv4", 00:12:10.257 "traddr": "10.0.0.1", 00:12:10.257 "trsvcid": "44622" 00:12:10.257 }, 00:12:10.257 "auth": { 00:12:10.257 "state": "completed", 00:12:10.257 "digest": "sha384", 00:12:10.257 "dhgroup": "null" 00:12:10.257 } 00:12:10.257 } 00:12:10.257 ]' 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.257 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.516 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:10.516 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.516 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.516 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.516 14:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.775 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:10.775 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.343 14:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:11.343 14:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.603 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.862 00:12:11.862 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.862 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.862 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.120 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.120 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.120 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.120 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.379 { 00:12:12.379 "cntlid": 51, 00:12:12.379 "qid": 0, 00:12:12.379 "state": "enabled", 00:12:12.379 "thread": "nvmf_tgt_poll_group_000", 00:12:12.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:12.379 "listen_address": { 00:12:12.379 "trtype": "TCP", 00:12:12.379 "adrfam": "IPv4", 00:12:12.379 "traddr": "10.0.0.3", 00:12:12.379 "trsvcid": "4420" 00:12:12.379 }, 00:12:12.379 "peer_address": { 00:12:12.379 "trtype": "TCP", 00:12:12.379 "adrfam": "IPv4", 00:12:12.379 "traddr": "10.0.0.1", 00:12:12.379 "trsvcid": "43710" 00:12:12.379 }, 00:12:12.379 "auth": { 00:12:12.379 "state": "completed", 00:12:12.379 "digest": "sha384", 00:12:12.379 "dhgroup": "null" 00:12:12.379 } 00:12:12.379 } 00:12:12.379 ]' 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.379 14:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.640 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:12.640 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.208 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:13.208 14:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.467 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.034 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.034 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.294 { 00:12:14.294 "cntlid": 53, 00:12:14.294 "qid": 0, 00:12:14.294 "state": "enabled", 00:12:14.294 "thread": "nvmf_tgt_poll_group_000", 00:12:14.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:14.294 "listen_address": { 00:12:14.294 "trtype": "TCP", 00:12:14.294 "adrfam": "IPv4", 00:12:14.294 "traddr": "10.0.0.3", 00:12:14.294 "trsvcid": "4420" 00:12:14.294 }, 00:12:14.294 "peer_address": { 00:12:14.294 "trtype": "TCP", 00:12:14.294 "adrfam": "IPv4", 00:12:14.294 "traddr": "10.0.0.1", 00:12:14.294 "trsvcid": "43742" 00:12:14.294 }, 00:12:14.294 "auth": { 00:12:14.294 "state": "completed", 00:12:14.294 "digest": "sha384", 00:12:14.294 "dhgroup": "null" 00:12:14.294 } 00:12:14.294 } 00:12:14.294 ]' 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.294 14:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.552 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:14.552 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:15.120 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.379 14:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.638 00:12:15.638 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.638 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
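The entries above switch the loop to key3 with the sha384 digest and the "null" group (no FFDHE exchange). Because no ckey3 is configured in this run, nvmf_subsystem_add_host registers only --dhchap-key key3 and the host attaches with a single key, i.e. unidirectional authentication. A condensed sketch of the per-key setup this part of the log exercises follows; HOST_SOCK, SUBSYS and HOSTNQN are stand-ins for the socket path and NQNs visible in the surrounding entries, and rpc_cmd in the log is the test's wrapper around the same scripts/rpc.py calls:

  # host-side bdev options: restrict DH-HMAC-CHAP to the digest/dhgroup under test
  scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # target side: allow HOSTNQN onto the subsystem with key3 only (no controller key)
  scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" --dhchap-key key3
  # host side: attach a controller over the authenticated TCP qpair
  scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 --dhchap-key key3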
00:12:15.638 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.897 { 00:12:15.897 "cntlid": 55, 00:12:15.897 "qid": 0, 00:12:15.897 "state": "enabled", 00:12:15.897 "thread": "nvmf_tgt_poll_group_000", 00:12:15.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:15.897 "listen_address": { 00:12:15.897 "trtype": "TCP", 00:12:15.897 "adrfam": "IPv4", 00:12:15.897 "traddr": "10.0.0.3", 00:12:15.897 "trsvcid": "4420" 00:12:15.897 }, 00:12:15.897 "peer_address": { 00:12:15.897 "trtype": "TCP", 00:12:15.897 "adrfam": "IPv4", 00:12:15.897 "traddr": "10.0.0.1", 00:12:15.897 "trsvcid": "43760" 00:12:15.897 }, 00:12:15.897 "auth": { 00:12:15.897 "state": "completed", 00:12:15.897 "digest": "sha384", 00:12:15.897 "dhgroup": "null" 00:12:15.897 } 00:12:15.897 } 00:12:15.897 ]' 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:15.897 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.157 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.157 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.157 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.416 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:16.416 14:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
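That completes the sha384 pass with the null DH group; the next entries repeat the loop with ffdhe2048. Each pass ends with the same verification: read back the controller name, inspect the authenticated qpair on the target, check the negotiated digest, dhgroup and auth state with jq, then detach and log in once more through the kernel initiator before removing the host. A minimal sketch of that verification, using the same placeholders as above and condensing the script's xtrace into plain commands (the target-side calls go through the test's rpc_cmd wrapper, whose socket is not shown in these entries):

  # confirm the host-side controller came up
  [[ "$(scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  # inspect the target's view of the qpair that carried the authentication
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBSYS")
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == null ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
  # tear down the bdev controller and re-run the same login via nvme-cli
  scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n "$SUBSYS" -q "$HOSTNQN" --hostid "$HOSTID" -i 1 -l 0 \
      --dhchap-secret "$DHCHAP_KEY"    # plus --dhchap-ctrl-secret when a controller key is configured
  nvme disconnect -n "$SUBSYS"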
00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:16.985 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.244 14:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.502 00:12:17.502 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.502 
14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.502 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.761 { 00:12:17.761 "cntlid": 57, 00:12:17.761 "qid": 0, 00:12:17.761 "state": "enabled", 00:12:17.761 "thread": "nvmf_tgt_poll_group_000", 00:12:17.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:17.761 "listen_address": { 00:12:17.761 "trtype": "TCP", 00:12:17.761 "adrfam": "IPv4", 00:12:17.761 "traddr": "10.0.0.3", 00:12:17.761 "trsvcid": "4420" 00:12:17.761 }, 00:12:17.761 "peer_address": { 00:12:17.761 "trtype": "TCP", 00:12:17.761 "adrfam": "IPv4", 00:12:17.761 "traddr": "10.0.0.1", 00:12:17.761 "trsvcid": "43774" 00:12:17.761 }, 00:12:17.761 "auth": { 00:12:17.761 "state": "completed", 00:12:17.761 "digest": "sha384", 00:12:17.761 "dhgroup": "ffdhe2048" 00:12:17.761 } 00:12:17.761 } 00:12:17.761 ]' 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.761 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.020 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.020 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.020 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.020 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.020 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.279 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:18.279 14:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: 
--dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.848 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.107 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.366 00:12:19.366 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.366 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.366 14:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.625 { 00:12:19.625 "cntlid": 59, 00:12:19.625 "qid": 0, 00:12:19.625 "state": "enabled", 00:12:19.625 "thread": "nvmf_tgt_poll_group_000", 00:12:19.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:19.625 "listen_address": { 00:12:19.625 "trtype": "TCP", 00:12:19.625 "adrfam": "IPv4", 00:12:19.625 "traddr": "10.0.0.3", 00:12:19.625 "trsvcid": "4420" 00:12:19.625 }, 00:12:19.625 "peer_address": { 00:12:19.625 "trtype": "TCP", 00:12:19.625 "adrfam": "IPv4", 00:12:19.625 "traddr": "10.0.0.1", 00:12:19.625 "trsvcid": "43806" 00:12:19.625 }, 00:12:19.625 "auth": { 00:12:19.625 "state": "completed", 00:12:19.625 "digest": "sha384", 00:12:19.625 "dhgroup": "ffdhe2048" 00:12:19.625 } 00:12:19.625 } 00:12:19.625 ]' 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.625 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.885 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.885 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.885 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.144 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:20.144 14:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:20.711 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.712 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.971 00:12:21.231 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.231 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.231 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.489 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.489 { 00:12:21.489 "cntlid": 61, 00:12:21.489 "qid": 0, 00:12:21.489 "state": "enabled", 00:12:21.489 "thread": "nvmf_tgt_poll_group_000", 00:12:21.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:21.489 "listen_address": { 00:12:21.489 "trtype": "TCP", 00:12:21.489 "adrfam": "IPv4", 00:12:21.489 "traddr": "10.0.0.3", 00:12:21.489 "trsvcid": "4420" 00:12:21.489 }, 00:12:21.489 "peer_address": { 00:12:21.489 "trtype": "TCP", 00:12:21.489 "adrfam": "IPv4", 00:12:21.489 "traddr": "10.0.0.1", 00:12:21.489 "trsvcid": "43848" 00:12:21.489 }, 00:12:21.489 "auth": { 00:12:21.489 "state": "completed", 00:12:21.489 "digest": "sha384", 00:12:21.489 "dhgroup": "ffdhe2048" 00:12:21.489 } 00:12:21.490 } 00:12:21.490 ]' 00:12:21.490 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.490 14:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.490 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.748 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:21.748 14:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:22.317 14:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.575 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:22.576 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.576 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.142 00:12:23.143 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.143 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.143 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.402 { 00:12:23.402 "cntlid": 63, 00:12:23.402 "qid": 0, 00:12:23.402 "state": "enabled", 00:12:23.402 "thread": "nvmf_tgt_poll_group_000", 00:12:23.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:23.402 "listen_address": { 00:12:23.402 "trtype": "TCP", 00:12:23.402 "adrfam": "IPv4", 00:12:23.402 "traddr": "10.0.0.3", 00:12:23.402 "trsvcid": "4420" 00:12:23.402 }, 00:12:23.402 "peer_address": { 00:12:23.402 "trtype": "TCP", 00:12:23.402 "adrfam": "IPv4", 00:12:23.402 "traddr": "10.0.0.1", 00:12:23.402 "trsvcid": "57858" 00:12:23.402 }, 00:12:23.402 "auth": { 00:12:23.402 "state": "completed", 00:12:23.402 "digest": "sha384", 00:12:23.402 "dhgroup": "ffdhe2048" 00:12:23.402 } 00:12:23.402 } 00:12:23.402 ]' 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.402 14:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.661 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:23.661 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:24.229 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.488 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:24.758 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.053 00:12:25.053 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.053 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.054 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.312 { 00:12:25.312 "cntlid": 65, 00:12:25.312 "qid": 0, 00:12:25.312 "state": "enabled", 00:12:25.312 "thread": "nvmf_tgt_poll_group_000", 00:12:25.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:25.312 "listen_address": { 00:12:25.312 "trtype": "TCP", 00:12:25.312 "adrfam": "IPv4", 00:12:25.312 "traddr": "10.0.0.3", 00:12:25.312 "trsvcid": "4420" 00:12:25.312 }, 00:12:25.312 "peer_address": { 00:12:25.312 "trtype": "TCP", 00:12:25.312 "adrfam": "IPv4", 00:12:25.312 "traddr": "10.0.0.1", 00:12:25.312 "trsvcid": "57892" 00:12:25.312 }, 00:12:25.312 "auth": { 00:12:25.312 "state": "completed", 00:12:25.312 "digest": "sha384", 00:12:25.312 "dhgroup": "ffdhe3072" 00:12:25.312 } 00:12:25.312 } 00:12:25.312 ]' 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.312 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.571 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:25.571 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.139 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.398 14:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.398 14:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.657 00:12:26.657 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.657 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.657 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.915 { 00:12:26.915 "cntlid": 67, 00:12:26.915 "qid": 0, 00:12:26.915 "state": "enabled", 00:12:26.915 "thread": "nvmf_tgt_poll_group_000", 00:12:26.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:26.915 "listen_address": { 00:12:26.915 "trtype": "TCP", 00:12:26.915 "adrfam": "IPv4", 00:12:26.915 "traddr": "10.0.0.3", 00:12:26.915 "trsvcid": "4420" 00:12:26.915 }, 00:12:26.915 "peer_address": { 00:12:26.915 "trtype": "TCP", 00:12:26.915 "adrfam": "IPv4", 00:12:26.915 "traddr": "10.0.0.1", 00:12:26.915 "trsvcid": "57916" 00:12:26.915 }, 00:12:26.915 "auth": { 00:12:26.915 "state": "completed", 00:12:26.915 "digest": "sha384", 00:12:26.915 "dhgroup": "ffdhe3072" 00:12:26.915 } 00:12:26.915 } 00:12:26.915 ]' 00:12:26.915 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.174 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.433 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:27.433 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.369 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.936 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.936 { 00:12:28.936 "cntlid": 69, 00:12:28.936 "qid": 0, 00:12:28.936 "state": "enabled", 00:12:28.936 "thread": "nvmf_tgt_poll_group_000", 00:12:28.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:28.936 "listen_address": { 00:12:28.936 "trtype": "TCP", 00:12:28.936 "adrfam": "IPv4", 00:12:28.936 "traddr": "10.0.0.3", 00:12:28.936 "trsvcid": "4420" 00:12:28.936 }, 00:12:28.936 "peer_address": { 00:12:28.936 "trtype": "TCP", 00:12:28.936 "adrfam": "IPv4", 00:12:28.936 "traddr": "10.0.0.1", 00:12:28.936 "trsvcid": "57944" 00:12:28.936 }, 00:12:28.936 "auth": { 00:12:28.936 "state": "completed", 00:12:28.936 "digest": "sha384", 00:12:28.936 "dhgroup": "ffdhe3072" 00:12:28.936 } 00:12:28.936 } 00:12:28.936 ]' 00:12:28.936 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:29.194 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.453 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:29.453 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:30.020 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.279 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.537 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.796 { 00:12:30.796 "cntlid": 71, 00:12:30.796 "qid": 0, 00:12:30.796 "state": "enabled", 00:12:30.796 "thread": "nvmf_tgt_poll_group_000", 00:12:30.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:30.796 "listen_address": { 00:12:30.796 "trtype": "TCP", 00:12:30.796 "adrfam": "IPv4", 00:12:30.796 "traddr": "10.0.0.3", 00:12:30.796 "trsvcid": "4420" 00:12:30.796 }, 00:12:30.796 "peer_address": { 00:12:30.796 "trtype": "TCP", 00:12:30.796 "adrfam": "IPv4", 00:12:30.796 "traddr": "10.0.0.1", 00:12:30.796 "trsvcid": "57962" 00:12:30.796 }, 00:12:30.796 "auth": { 00:12:30.796 "state": "completed", 00:12:30.796 "digest": "sha384", 00:12:30.796 "dhgroup": "ffdhe3072" 00:12:30.796 } 00:12:30.796 } 00:12:30.796 ]' 00:12:30.796 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.055 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.314 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:31.314 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:31.881 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.141 14:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.141 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.400 00:12:32.658 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.658 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.658 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.918 { 00:12:32.918 "cntlid": 73, 00:12:32.918 "qid": 0, 00:12:32.918 "state": "enabled", 00:12:32.918 "thread": "nvmf_tgt_poll_group_000", 00:12:32.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:32.918 "listen_address": { 00:12:32.918 "trtype": "TCP", 00:12:32.918 "adrfam": "IPv4", 00:12:32.918 "traddr": "10.0.0.3", 00:12:32.918 "trsvcid": "4420" 00:12:32.918 }, 00:12:32.918 "peer_address": { 00:12:32.918 "trtype": "TCP", 00:12:32.918 "adrfam": "IPv4", 00:12:32.918 "traddr": "10.0.0.1", 00:12:32.918 "trsvcid": "46048" 00:12:32.918 }, 00:12:32.918 "auth": { 00:12:32.918 "state": "completed", 00:12:32.918 "digest": "sha384", 00:12:32.918 "dhgroup": "ffdhe4096" 00:12:32.918 } 00:12:32.918 } 00:12:32.918 ]' 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.918 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.177 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:33.177 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:33.744 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.744 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:33.744 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.744 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.003 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.003 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.003 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.003 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.262 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:34.262 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.263 14:50:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.263 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.522 00:12:34.522 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.522 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.522 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.780 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.780 { 00:12:34.781 "cntlid": 75, 00:12:34.781 "qid": 0, 00:12:34.781 "state": "enabled", 00:12:34.781 "thread": "nvmf_tgt_poll_group_000", 00:12:34.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:34.781 "listen_address": { 00:12:34.781 "trtype": "TCP", 00:12:34.781 "adrfam": "IPv4", 00:12:34.781 "traddr": "10.0.0.3", 00:12:34.781 "trsvcid": "4420" 00:12:34.781 }, 00:12:34.781 "peer_address": { 00:12:34.781 "trtype": "TCP", 00:12:34.781 "adrfam": "IPv4", 00:12:34.781 "traddr": "10.0.0.1", 00:12:34.781 "trsvcid": "46072" 00:12:34.781 }, 00:12:34.781 "auth": { 00:12:34.781 "state": "completed", 00:12:34.781 "digest": "sha384", 00:12:34.781 "dhgroup": "ffdhe4096" 00:12:34.781 } 00:12:34.781 } 00:12:34.781 ]' 00:12:34.781 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.781 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.781 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.781 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:34.781 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.039 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.039 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.039 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.298 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:35.298 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:35.865 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.123 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.381 00:12:36.381 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.381 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.381 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.639 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.640 { 00:12:36.640 "cntlid": 77, 00:12:36.640 "qid": 0, 00:12:36.640 "state": "enabled", 00:12:36.640 "thread": "nvmf_tgt_poll_group_000", 00:12:36.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:36.640 "listen_address": { 00:12:36.640 "trtype": "TCP", 00:12:36.640 "adrfam": "IPv4", 00:12:36.640 "traddr": "10.0.0.3", 00:12:36.640 "trsvcid": "4420" 00:12:36.640 }, 00:12:36.640 "peer_address": { 00:12:36.640 "trtype": "TCP", 00:12:36.640 "adrfam": "IPv4", 00:12:36.640 "traddr": "10.0.0.1", 00:12:36.640 "trsvcid": "46080" 00:12:36.640 }, 00:12:36.640 "auth": { 00:12:36.640 "state": "completed", 00:12:36.640 "digest": "sha384", 00:12:36.640 "dhgroup": "ffdhe4096" 00:12:36.640 } 00:12:36.640 } 00:12:36.640 ]' 00:12:36.640 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.949 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.207 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:37.207 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:37.772 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.030 14:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.030 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.598 00:12:38.598 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.598 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.598 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.598 { 00:12:38.598 "cntlid": 79, 00:12:38.598 "qid": 0, 00:12:38.598 "state": "enabled", 00:12:38.598 "thread": "nvmf_tgt_poll_group_000", 00:12:38.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:38.598 "listen_address": { 00:12:38.598 "trtype": "TCP", 00:12:38.598 "adrfam": "IPv4", 00:12:38.598 "traddr": "10.0.0.3", 00:12:38.598 "trsvcid": "4420" 00:12:38.598 }, 00:12:38.598 "peer_address": { 00:12:38.598 "trtype": "TCP", 00:12:38.598 "adrfam": "IPv4", 00:12:38.598 "traddr": "10.0.0.1", 00:12:38.598 "trsvcid": "46104" 00:12:38.598 }, 00:12:38.598 "auth": { 00:12:38.598 "state": "completed", 00:12:38.598 "digest": "sha384", 00:12:38.598 "dhgroup": "ffdhe4096" 00:12:38.598 } 00:12:38.598 } 00:12:38.598 ]' 00:12:38.598 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.858 14:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.858 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.117 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:39.117 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.685 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.944 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.514 00:12:40.514 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.514 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.514 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.514 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.514 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.514 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.514 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.774 { 00:12:40.774 "cntlid": 81, 00:12:40.774 "qid": 0, 00:12:40.774 "state": "enabled", 00:12:40.774 "thread": "nvmf_tgt_poll_group_000", 00:12:40.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:40.774 "listen_address": { 00:12:40.774 "trtype": "TCP", 00:12:40.774 "adrfam": "IPv4", 00:12:40.774 "traddr": "10.0.0.3", 00:12:40.774 "trsvcid": "4420" 00:12:40.774 }, 00:12:40.774 "peer_address": { 00:12:40.774 "trtype": "TCP", 00:12:40.774 "adrfam": "IPv4", 00:12:40.774 "traddr": "10.0.0.1", 00:12:40.774 "trsvcid": "46130" 00:12:40.774 }, 00:12:40.774 "auth": { 00:12:40.774 "state": "completed", 00:12:40.774 "digest": "sha384", 00:12:40.774 "dhgroup": "ffdhe6144" 00:12:40.774 } 00:12:40.774 } 00:12:40.774 ]' 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
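The entries above and below repeat one DH-HMAC-CHAP verification cycle per digest/dhgroup/key combination (sha384 with ffdhe3072, ffdhe4096, ffdhe6144 and keys key0 through key3). A minimal sketch of that cycle, assembled from the commands visible in this log and with placeholder host UUID and DHHC-1 secrets standing in for the logged values, looks roughly like this:

  # Host side: restrict the initiator to the digest/dhgroup pair under test (values are placeholders).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host NQN on the subsystem with the key under test
  # (the controller key is only passed when one is defined for that key index).
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller through the host RPC server, then check the negotiated
  # auth state on the target's queue pair (digest, dhgroup, state == completed).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Kernel initiator path: connect with the raw DHHC-1 secrets, then disconnect and
  # remove the host entry before the next digest/dhgroup/key combination.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> -l 0 \
      --dhchap-secret 'DHHC-1:01:<secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:<host-uuid>

This is a sketch of the sequence exercised by target/auth.sh, not a replacement for it; the placeholder UUID and secrets must be substituted with real values generated for the subsystem.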
00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.774 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.033 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:41.033 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:41.999 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.000 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.567 00:12:42.567 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.567 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.567 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.826 { 00:12:42.826 "cntlid": 83, 00:12:42.826 "qid": 0, 00:12:42.826 "state": "enabled", 00:12:42.826 "thread": "nvmf_tgt_poll_group_000", 00:12:42.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:42.826 "listen_address": { 00:12:42.826 "trtype": "TCP", 00:12:42.826 "adrfam": "IPv4", 00:12:42.826 "traddr": "10.0.0.3", 00:12:42.826 "trsvcid": "4420" 00:12:42.826 }, 00:12:42.826 "peer_address": { 00:12:42.826 "trtype": "TCP", 00:12:42.826 "adrfam": "IPv4", 00:12:42.826 "traddr": "10.0.0.1", 00:12:42.826 "trsvcid": "53092" 00:12:42.826 }, 00:12:42.826 "auth": { 00:12:42.826 "state": "completed", 00:12:42.826 "digest": "sha384", 
00:12:42.826 "dhgroup": "ffdhe6144" 00:12:42.826 } 00:12:42.826 } 00:12:42.826 ]' 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:42.826 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.085 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.085 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.085 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.085 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:43.085 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.653 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.912 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.479 00:12:44.479 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.479 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.479 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.738 { 00:12:44.738 "cntlid": 85, 00:12:44.738 "qid": 0, 00:12:44.738 "state": "enabled", 00:12:44.738 "thread": "nvmf_tgt_poll_group_000", 00:12:44.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:44.738 "listen_address": { 00:12:44.738 "trtype": "TCP", 00:12:44.738 "adrfam": "IPv4", 00:12:44.738 "traddr": "10.0.0.3", 00:12:44.738 "trsvcid": "4420" 00:12:44.738 }, 00:12:44.738 "peer_address": { 00:12:44.738 "trtype": "TCP", 00:12:44.738 "adrfam": "IPv4", 00:12:44.738 "traddr": "10.0.0.1", 00:12:44.738 "trsvcid": "53120" 
00:12:44.738 }, 00:12:44.738 "auth": { 00:12:44.738 "state": "completed", 00:12:44.738 "digest": "sha384", 00:12:44.738 "dhgroup": "ffdhe6144" 00:12:44.738 } 00:12:44.738 } 00:12:44.738 ]' 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.738 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.996 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:44.996 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:45.933 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.934 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.502 00:12:46.502 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.502 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.502 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.762 { 00:12:46.762 "cntlid": 87, 00:12:46.762 "qid": 0, 00:12:46.762 "state": "enabled", 00:12:46.762 "thread": "nvmf_tgt_poll_group_000", 00:12:46.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:46.762 "listen_address": { 00:12:46.762 "trtype": "TCP", 00:12:46.762 "adrfam": "IPv4", 00:12:46.762 "traddr": "10.0.0.3", 00:12:46.762 "trsvcid": "4420" 00:12:46.762 }, 00:12:46.762 "peer_address": { 00:12:46.762 "trtype": "TCP", 00:12:46.762 "adrfam": "IPv4", 00:12:46.762 "traddr": "10.0.0.1", 00:12:46.762 "trsvcid": 
"53154" 00:12:46.762 }, 00:12:46.762 "auth": { 00:12:46.762 "state": "completed", 00:12:46.762 "digest": "sha384", 00:12:46.762 "dhgroup": "ffdhe6144" 00:12:46.762 } 00:12:46.762 } 00:12:46.762 ]' 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.762 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.021 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:47.021 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.021 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.021 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.021 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.280 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:47.280 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.848 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.108 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.675 00:12:48.675 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.675 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.675 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.934 { 00:12:48.934 "cntlid": 89, 00:12:48.934 "qid": 0, 00:12:48.934 "state": "enabled", 00:12:48.934 "thread": "nvmf_tgt_poll_group_000", 00:12:48.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:48.934 "listen_address": { 00:12:48.934 "trtype": "TCP", 00:12:48.934 "adrfam": "IPv4", 00:12:48.934 "traddr": "10.0.0.3", 00:12:48.934 "trsvcid": "4420" 00:12:48.934 }, 00:12:48.934 "peer_address": { 00:12:48.934 
"trtype": "TCP", 00:12:48.934 "adrfam": "IPv4", 00:12:48.934 "traddr": "10.0.0.1", 00:12:48.934 "trsvcid": "53182" 00:12:48.934 }, 00:12:48.934 "auth": { 00:12:48.934 "state": "completed", 00:12:48.934 "digest": "sha384", 00:12:48.934 "dhgroup": "ffdhe8192" 00:12:48.934 } 00:12:48.934 } 00:12:48.934 ]' 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.934 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.193 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:49.193 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.761 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:50.024 14:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.024 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.592 00:12:50.592 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.592 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.592 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.850 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.851 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.851 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.851 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.109 { 00:12:51.109 "cntlid": 91, 00:12:51.109 "qid": 0, 00:12:51.109 "state": "enabled", 00:12:51.109 "thread": "nvmf_tgt_poll_group_000", 00:12:51.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 
00:12:51.109 "listen_address": { 00:12:51.109 "trtype": "TCP", 00:12:51.109 "adrfam": "IPv4", 00:12:51.109 "traddr": "10.0.0.3", 00:12:51.109 "trsvcid": "4420" 00:12:51.109 }, 00:12:51.109 "peer_address": { 00:12:51.109 "trtype": "TCP", 00:12:51.109 "adrfam": "IPv4", 00:12:51.109 "traddr": "10.0.0.1", 00:12:51.109 "trsvcid": "53206" 00:12:51.109 }, 00:12:51.109 "auth": { 00:12:51.109 "state": "completed", 00:12:51.109 "digest": "sha384", 00:12:51.109 "dhgroup": "ffdhe8192" 00:12:51.109 } 00:12:51.109 } 00:12:51.109 ]' 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.109 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.110 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.110 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.368 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:51.368 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.936 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.195 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.763 00:12:52.763 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.763 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.763 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.022 { 00:12:53.022 "cntlid": 93, 00:12:53.022 "qid": 0, 00:12:53.022 "state": "enabled", 00:12:53.022 "thread": 
"nvmf_tgt_poll_group_000", 00:12:53.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:53.022 "listen_address": { 00:12:53.022 "trtype": "TCP", 00:12:53.022 "adrfam": "IPv4", 00:12:53.022 "traddr": "10.0.0.3", 00:12:53.022 "trsvcid": "4420" 00:12:53.022 }, 00:12:53.022 "peer_address": { 00:12:53.022 "trtype": "TCP", 00:12:53.022 "adrfam": "IPv4", 00:12:53.022 "traddr": "10.0.0.1", 00:12:53.022 "trsvcid": "32818" 00:12:53.022 }, 00:12:53.022 "auth": { 00:12:53.022 "state": "completed", 00:12:53.022 "digest": "sha384", 00:12:53.022 "dhgroup": "ffdhe8192" 00:12:53.022 } 00:12:53.022 } 00:12:53.022 ]' 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.022 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.282 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.282 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.282 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.540 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:53.540 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.108 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:54.108 14:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.368 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.936 00:12:54.936 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.936 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.936 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.196 { 00:12:55.196 "cntlid": 95, 00:12:55.196 "qid": 0, 00:12:55.196 "state": "enabled", 00:12:55.196 
"thread": "nvmf_tgt_poll_group_000", 00:12:55.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:55.196 "listen_address": { 00:12:55.196 "trtype": "TCP", 00:12:55.196 "adrfam": "IPv4", 00:12:55.196 "traddr": "10.0.0.3", 00:12:55.196 "trsvcid": "4420" 00:12:55.196 }, 00:12:55.196 "peer_address": { 00:12:55.196 "trtype": "TCP", 00:12:55.196 "adrfam": "IPv4", 00:12:55.196 "traddr": "10.0.0.1", 00:12:55.196 "trsvcid": "32848" 00:12:55.196 }, 00:12:55.196 "auth": { 00:12:55.196 "state": "completed", 00:12:55.196 "digest": "sha384", 00:12:55.196 "dhgroup": "ffdhe8192" 00:12:55.196 } 00:12:55.196 } 00:12:55.196 ]' 00:12:55.196 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.455 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.455 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.455 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.455 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.455 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.455 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.455 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.715 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:55.715 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.301 14:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.301 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.559 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.818 00:12:56.818 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.818 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.818 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.076 { 00:12:57.076 "cntlid": 97, 00:12:57.076 "qid": 0, 00:12:57.076 "state": "enabled", 00:12:57.076 "thread": "nvmf_tgt_poll_group_000", 00:12:57.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:57.076 "listen_address": { 00:12:57.076 "trtype": "TCP", 00:12:57.076 "adrfam": "IPv4", 00:12:57.076 "traddr": "10.0.0.3", 00:12:57.076 "trsvcid": "4420" 00:12:57.076 }, 00:12:57.076 "peer_address": { 00:12:57.076 "trtype": "TCP", 00:12:57.076 "adrfam": "IPv4", 00:12:57.076 "traddr": "10.0.0.1", 00:12:57.076 "trsvcid": "32878" 00:12:57.076 }, 00:12:57.076 "auth": { 00:12:57.076 "state": "completed", 00:12:57.076 "digest": "sha512", 00:12:57.076 "dhgroup": "null" 00:12:57.076 } 00:12:57.076 } 00:12:57.076 ]' 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.076 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.334 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:57.334 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.335 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.335 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.335 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.593 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:57.593 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.161 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.420 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.420 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.420 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.420 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.988 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.988 14:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.988 { 00:12:58.988 "cntlid": 99, 00:12:58.988 "qid": 0, 00:12:58.988 "state": "enabled", 00:12:58.988 "thread": "nvmf_tgt_poll_group_000", 00:12:58.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:12:58.988 "listen_address": { 00:12:58.988 "trtype": "TCP", 00:12:58.988 "adrfam": "IPv4", 00:12:58.988 "traddr": "10.0.0.3", 00:12:58.988 "trsvcid": "4420" 00:12:58.988 }, 00:12:58.988 "peer_address": { 00:12:58.988 "trtype": "TCP", 00:12:58.988 "adrfam": "IPv4", 00:12:58.988 "traddr": "10.0.0.1", 00:12:58.988 "trsvcid": "32904" 00:12:58.988 }, 00:12:58.988 "auth": { 00:12:58.988 "state": "completed", 00:12:58.988 "digest": "sha512", 00:12:58.988 "dhgroup": "null" 00:12:58.988 } 00:12:58.988 } 00:12:58.988 ]' 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.988 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.247 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:59.247 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.247 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.247 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.247 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.506 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:12:59.506 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.074 14:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.074 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.333 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.592 00:13:00.592 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.592 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.592 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.851 { 00:13:00.851 "cntlid": 101, 00:13:00.851 "qid": 0, 00:13:00.851 "state": "enabled", 00:13:00.851 "thread": "nvmf_tgt_poll_group_000", 00:13:00.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:00.851 "listen_address": { 00:13:00.851 "trtype": "TCP", 00:13:00.851 "adrfam": "IPv4", 00:13:00.851 "traddr": "10.0.0.3", 00:13:00.851 "trsvcid": "4420" 00:13:00.851 }, 00:13:00.851 "peer_address": { 00:13:00.851 "trtype": "TCP", 00:13:00.851 "adrfam": "IPv4", 00:13:00.851 "traddr": "10.0.0.1", 00:13:00.851 "trsvcid": "32932" 00:13:00.851 }, 00:13:00.851 "auth": { 00:13:00.851 "state": "completed", 00:13:00.851 "digest": "sha512", 00:13:00.851 "dhgroup": "null" 00:13:00.851 } 00:13:00.851 } 00:13:00.851 ]' 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.851 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.110 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:01.110 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.110 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.110 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.110 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.369 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:01.369 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:01.936 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.195 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.454 00:13:02.454 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.454 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.454 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.712 { 00:13:02.712 "cntlid": 103, 00:13:02.712 "qid": 0, 00:13:02.712 "state": "enabled", 00:13:02.712 "thread": "nvmf_tgt_poll_group_000", 00:13:02.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:02.712 "listen_address": { 00:13:02.712 "trtype": "TCP", 00:13:02.712 "adrfam": "IPv4", 00:13:02.712 "traddr": "10.0.0.3", 00:13:02.712 "trsvcid": "4420" 00:13:02.712 }, 00:13:02.712 "peer_address": { 00:13:02.712 "trtype": "TCP", 00:13:02.712 "adrfam": "IPv4", 00:13:02.712 "traddr": "10.0.0.1", 00:13:02.712 "trsvcid": "53108" 00:13:02.712 }, 00:13:02.712 "auth": { 00:13:02.712 "state": "completed", 00:13:02.712 "digest": "sha512", 00:13:02.712 "dhgroup": "null" 00:13:02.712 } 00:13:02.712 } 00:13:02.712 ]' 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:02.712 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.971 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.971 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.971 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.971 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:02.971 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.538 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.797 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.056 00:13:04.056 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.056 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.056 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.624 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.624 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.624 
14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.624 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.624 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.624 { 00:13:04.624 "cntlid": 105, 00:13:04.624 "qid": 0, 00:13:04.624 "state": "enabled", 00:13:04.624 "thread": "nvmf_tgt_poll_group_000", 00:13:04.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:04.624 "listen_address": { 00:13:04.624 "trtype": "TCP", 00:13:04.624 "adrfam": "IPv4", 00:13:04.624 "traddr": "10.0.0.3", 00:13:04.624 "trsvcid": "4420" 00:13:04.624 }, 00:13:04.624 "peer_address": { 00:13:04.624 "trtype": "TCP", 00:13:04.624 "adrfam": "IPv4", 00:13:04.624 "traddr": "10.0.0.1", 00:13:04.624 "trsvcid": "53138" 00:13:04.624 }, 00:13:04.624 "auth": { 00:13:04.624 "state": "completed", 00:13:04.624 "digest": "sha512", 00:13:04.624 "dhgroup": "ffdhe2048" 00:13:04.624 } 00:13:04.624 } 00:13:04.624 ]' 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.624 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.881 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:04.882 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:05.448 14:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.448 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.707 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.966 00:13:05.966 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.966 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.966 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.226 { 00:13:06.226 "cntlid": 107, 00:13:06.226 "qid": 0, 00:13:06.226 "state": "enabled", 00:13:06.226 "thread": "nvmf_tgt_poll_group_000", 00:13:06.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:06.226 "listen_address": { 00:13:06.226 "trtype": "TCP", 00:13:06.226 "adrfam": "IPv4", 00:13:06.226 "traddr": "10.0.0.3", 00:13:06.226 "trsvcid": "4420" 00:13:06.226 }, 00:13:06.226 "peer_address": { 00:13:06.226 "trtype": "TCP", 00:13:06.226 "adrfam": "IPv4", 00:13:06.226 "traddr": "10.0.0.1", 00:13:06.226 "trsvcid": "53162" 00:13:06.226 }, 00:13:06.226 "auth": { 00:13:06.226 "state": "completed", 00:13:06.226 "digest": "sha512", 00:13:06.226 "dhgroup": "ffdhe2048" 00:13:06.226 } 00:13:06.226 } 00:13:06.226 ]' 00:13:06.226 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.484 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.484 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.484 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.484 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.484 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.484 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.484 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.743 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:06.744 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.308 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.567 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.826 00:13:07.826 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.826 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.826 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.084 { 00:13:08.084 "cntlid": 109, 00:13:08.084 "qid": 0, 00:13:08.084 "state": "enabled", 00:13:08.084 "thread": "nvmf_tgt_poll_group_000", 00:13:08.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:08.084 "listen_address": { 00:13:08.084 "trtype": "TCP", 00:13:08.084 "adrfam": "IPv4", 00:13:08.084 "traddr": "10.0.0.3", 00:13:08.084 "trsvcid": "4420" 00:13:08.084 }, 00:13:08.084 "peer_address": { 00:13:08.084 "trtype": "TCP", 00:13:08.084 "adrfam": "IPv4", 00:13:08.084 "traddr": "10.0.0.1", 00:13:08.084 "trsvcid": "53172" 00:13:08.084 }, 00:13:08.084 "auth": { 00:13:08.084 "state": "completed", 00:13:08.084 "digest": "sha512", 00:13:08.084 "dhgroup": "ffdhe2048" 00:13:08.084 } 00:13:08.084 } 00:13:08.084 ]' 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.084 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.343 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.343 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.343 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.601 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:08.601 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
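The trace above completes one full connect_authenticate iteration for sha512 with the ffdhe2048 group and key2: restrict the host's DH-HMAC-CHAP options, register the host on the subsystem with the key (and controller key), attach a bdev controller and verify the qpair's auth state with jq, detach, repeat the handshake with nvme-cli, then remove the host. The block below is a condensed illustrative sketch of that sequence, not part of the test output; SUBNQN, HOSTNQN, HOSTID and the DHHC-1 secrets are placeholders standing in for the values seen in the log, and the assumption that the target-side RPCs go to the default SPDK socket (the script's rpc_cmd) while host-side RPCs use /var/tmp/host.sock mirrors the invocations traced above.

```bash
#!/usr/bin/env bash
# Illustrative sketch of one auth iteration, as traced in the log above (placeholders assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                      # host-side SPDK RPC socket (as in the log)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801
HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801

# 1. Limit the host to one digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with key2/ckey2 (target-side RPC, default socket assumed).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller through the host RPC and confirm it authenticated.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect: completed
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with nvme-cli (secrets elided here), then clean up.
nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
    --dhchap-secret 'DHHC-1:02:<key2 secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ckey2 secret>'
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
```

The remaining trace repeats this cycle for the other key indices and then moves on to the ffdhe3072 group, which is why the same RPC pattern recurs below with only the key and dhgroup arguments changing.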
00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:09.169 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.428 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.685 00:13:09.685 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.685 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.685 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.944 { 00:13:09.944 "cntlid": 111, 00:13:09.944 "qid": 0, 00:13:09.944 "state": "enabled", 00:13:09.944 "thread": "nvmf_tgt_poll_group_000", 00:13:09.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:09.944 "listen_address": { 00:13:09.944 "trtype": "TCP", 00:13:09.944 "adrfam": "IPv4", 00:13:09.944 "traddr": "10.0.0.3", 00:13:09.944 "trsvcid": "4420" 00:13:09.944 }, 00:13:09.944 "peer_address": { 00:13:09.944 "trtype": "TCP", 00:13:09.944 "adrfam": "IPv4", 00:13:09.944 "traddr": "10.0.0.1", 00:13:09.944 "trsvcid": "53186" 00:13:09.944 }, 00:13:09.944 "auth": { 00:13:09.944 "state": "completed", 00:13:09.944 "digest": "sha512", 00:13:09.944 "dhgroup": "ffdhe2048" 00:13:09.944 } 00:13:09.944 } 00:13:09.944 ]' 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.944 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.203 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.203 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.203 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.462 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:10.462 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:11.058 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.059 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.328 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.328 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.328 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.329 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.587 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.587 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.846 { 00:13:11.846 "cntlid": 113, 00:13:11.846 "qid": 0, 00:13:11.846 "state": "enabled", 00:13:11.846 "thread": "nvmf_tgt_poll_group_000", 00:13:11.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:11.846 "listen_address": { 00:13:11.846 "trtype": "TCP", 00:13:11.846 "adrfam": "IPv4", 00:13:11.846 "traddr": "10.0.0.3", 00:13:11.846 "trsvcid": "4420" 00:13:11.846 }, 00:13:11.846 "peer_address": { 00:13:11.846 "trtype": "TCP", 00:13:11.846 "adrfam": "IPv4", 00:13:11.846 "traddr": "10.0.0.1", 00:13:11.846 "trsvcid": "53216" 00:13:11.846 }, 00:13:11.846 "auth": { 00:13:11.846 "state": "completed", 00:13:11.846 "digest": "sha512", 00:13:11.846 "dhgroup": "ffdhe3072" 00:13:11.846 } 00:13:11.846 } 00:13:11.846 ]' 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.846 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.105 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:12.105 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret 
DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.672 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.931 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.190 00:13:13.190 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.190 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.190 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.449 { 00:13:13.449 "cntlid": 115, 00:13:13.449 "qid": 0, 00:13:13.449 "state": "enabled", 00:13:13.449 "thread": "nvmf_tgt_poll_group_000", 00:13:13.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:13.449 "listen_address": { 00:13:13.449 "trtype": "TCP", 00:13:13.449 "adrfam": "IPv4", 00:13:13.449 "traddr": "10.0.0.3", 00:13:13.449 "trsvcid": "4420" 00:13:13.449 }, 00:13:13.449 "peer_address": { 00:13:13.449 "trtype": "TCP", 00:13:13.449 "adrfam": "IPv4", 00:13:13.449 "traddr": "10.0.0.1", 00:13:13.449 "trsvcid": "58612" 00:13:13.449 }, 00:13:13.449 "auth": { 00:13:13.449 "state": "completed", 00:13:13.449 "digest": "sha512", 00:13:13.449 "dhgroup": "ffdhe3072" 00:13:13.449 } 00:13:13.449 } 00:13:13.449 ]' 00:13:13.449 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.707 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.965 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:13.965 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid 
b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.532 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:15.097 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:15.097 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.097 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.097 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:15.097 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.098 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.355 00:13:15.355 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.355 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.355 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.614 { 00:13:15.614 "cntlid": 117, 00:13:15.614 "qid": 0, 00:13:15.614 "state": "enabled", 00:13:15.614 "thread": "nvmf_tgt_poll_group_000", 00:13:15.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:15.614 "listen_address": { 00:13:15.614 "trtype": "TCP", 00:13:15.614 "adrfam": "IPv4", 00:13:15.614 "traddr": "10.0.0.3", 00:13:15.614 "trsvcid": "4420" 00:13:15.614 }, 00:13:15.614 "peer_address": { 00:13:15.614 "trtype": "TCP", 00:13:15.614 "adrfam": "IPv4", 00:13:15.614 "traddr": "10.0.0.1", 00:13:15.614 "trsvcid": "58638" 00:13:15.614 }, 00:13:15.614 "auth": { 00:13:15.614 "state": "completed", 00:13:15.614 "digest": "sha512", 00:13:15.614 "dhgroup": "ffdhe3072" 00:13:15.614 } 00:13:15.614 } 00:13:15.614 ]' 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.614 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.872 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:15.872 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:16.440 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.440 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.700 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.266 00:13:17.266 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.266 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.266 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.525 { 00:13:17.525 "cntlid": 119, 00:13:17.525 "qid": 0, 00:13:17.525 "state": "enabled", 00:13:17.525 "thread": "nvmf_tgt_poll_group_000", 00:13:17.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:17.525 "listen_address": { 00:13:17.525 "trtype": "TCP", 00:13:17.525 "adrfam": "IPv4", 00:13:17.525 "traddr": "10.0.0.3", 00:13:17.525 "trsvcid": "4420" 00:13:17.525 }, 00:13:17.525 "peer_address": { 00:13:17.525 "trtype": "TCP", 00:13:17.525 "adrfam": "IPv4", 00:13:17.525 "traddr": "10.0.0.1", 00:13:17.525 "trsvcid": "58666" 00:13:17.525 }, 00:13:17.525 "auth": { 00:13:17.525 "state": "completed", 00:13:17.525 "digest": "sha512", 00:13:17.525 "dhgroup": "ffdhe3072" 00:13:17.525 } 00:13:17.525 } 00:13:17.525 ]' 00:13:17.525 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.526 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.526 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.526 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.526 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.526 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.526 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.526 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.784 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:17.784 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:18.353 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.354 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.928 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.186 00:13:19.186 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.187 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.187 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.446 { 00:13:19.446 "cntlid": 121, 00:13:19.446 "qid": 0, 00:13:19.446 "state": "enabled", 00:13:19.446 "thread": "nvmf_tgt_poll_group_000", 00:13:19.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:19.446 "listen_address": { 00:13:19.446 "trtype": "TCP", 00:13:19.446 "adrfam": "IPv4", 00:13:19.446 "traddr": "10.0.0.3", 00:13:19.446 "trsvcid": "4420" 00:13:19.446 }, 00:13:19.446 "peer_address": { 00:13:19.446 "trtype": "TCP", 00:13:19.446 "adrfam": "IPv4", 00:13:19.446 "traddr": "10.0.0.1", 00:13:19.446 "trsvcid": "58696" 00:13:19.446 }, 00:13:19.446 "auth": { 00:13:19.446 "state": "completed", 00:13:19.446 "digest": "sha512", 00:13:19.446 "dhgroup": "ffdhe4096" 00:13:19.446 } 00:13:19.446 } 00:13:19.446 ]' 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.446 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.446 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.446 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.446 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.705 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret 
DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:19.705 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:20.643 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.643 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:20.643 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.643 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.643 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.903 00:13:21.161 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.161 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.161 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.419 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.419 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.419 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.419 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.420 { 00:13:21.420 "cntlid": 123, 00:13:21.420 "qid": 0, 00:13:21.420 "state": "enabled", 00:13:21.420 "thread": "nvmf_tgt_poll_group_000", 00:13:21.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:21.420 "listen_address": { 00:13:21.420 "trtype": "TCP", 00:13:21.420 "adrfam": "IPv4", 00:13:21.420 "traddr": "10.0.0.3", 00:13:21.420 "trsvcid": "4420" 00:13:21.420 }, 00:13:21.420 "peer_address": { 00:13:21.420 "trtype": "TCP", 00:13:21.420 "adrfam": "IPv4", 00:13:21.420 "traddr": "10.0.0.1", 00:13:21.420 "trsvcid": "58738" 00:13:21.420 }, 00:13:21.420 "auth": { 00:13:21.420 "state": "completed", 00:13:21.420 "digest": "sha512", 00:13:21.420 "dhgroup": "ffdhe4096" 00:13:21.420 } 00:13:21.420 } 00:13:21.420 ]' 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.420 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.420 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.420 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.420 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.679 14:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:21.679 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.616 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.616 14:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.616 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.191 00:13:23.191 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.191 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.191 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.451 { 00:13:23.451 "cntlid": 125, 00:13:23.451 "qid": 0, 00:13:23.451 "state": "enabled", 00:13:23.451 "thread": "nvmf_tgt_poll_group_000", 00:13:23.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:23.451 "listen_address": { 00:13:23.451 "trtype": "TCP", 00:13:23.451 "adrfam": "IPv4", 00:13:23.451 "traddr": "10.0.0.3", 00:13:23.451 "trsvcid": "4420" 00:13:23.451 }, 00:13:23.451 "peer_address": { 00:13:23.451 "trtype": "TCP", 00:13:23.451 "adrfam": "IPv4", 00:13:23.451 "traddr": "10.0.0.1", 00:13:23.451 "trsvcid": "47332" 00:13:23.451 }, 00:13:23.451 "auth": { 00:13:23.451 "state": "completed", 00:13:23.451 "digest": "sha512", 00:13:23.451 "dhgroup": "ffdhe4096" 00:13:23.451 } 00:13:23.451 } 00:13:23.451 ]' 00:13:23.451 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.451 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.451 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.451 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.451 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.710 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.710 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.710 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.970 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:23.970 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.538 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.796 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:24.796 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.796 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.796 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.796 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.797 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.055 00:13:25.055 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.055 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.055 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.314 { 00:13:25.314 "cntlid": 127, 00:13:25.314 "qid": 0, 00:13:25.314 "state": "enabled", 00:13:25.314 "thread": "nvmf_tgt_poll_group_000", 00:13:25.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:25.314 "listen_address": { 00:13:25.314 "trtype": "TCP", 00:13:25.314 "adrfam": "IPv4", 00:13:25.314 "traddr": "10.0.0.3", 00:13:25.314 "trsvcid": "4420" 00:13:25.314 }, 00:13:25.314 "peer_address": { 00:13:25.314 "trtype": "TCP", 00:13:25.314 "adrfam": "IPv4", 00:13:25.314 "traddr": "10.0.0.1", 00:13:25.314 "trsvcid": "47352" 00:13:25.314 }, 00:13:25.314 "auth": { 00:13:25.314 "state": "completed", 00:13:25.314 "digest": "sha512", 00:13:25.314 "dhgroup": "ffdhe4096" 00:13:25.314 } 00:13:25.314 } 00:13:25.314 ]' 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:25.314 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.573 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.573 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.573 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.831 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:25.831 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.400 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.659 14:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.659 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.918 00:13:26.918 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.918 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.918 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.177 { 00:13:27.177 "cntlid": 129, 00:13:27.177 "qid": 0, 00:13:27.177 "state": "enabled", 00:13:27.177 "thread": "nvmf_tgt_poll_group_000", 00:13:27.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:27.177 "listen_address": { 00:13:27.177 "trtype": "TCP", 00:13:27.177 "adrfam": "IPv4", 00:13:27.177 "traddr": "10.0.0.3", 00:13:27.177 "trsvcid": "4420" 00:13:27.177 }, 00:13:27.177 "peer_address": { 00:13:27.177 "trtype": "TCP", 00:13:27.177 "adrfam": "IPv4", 00:13:27.177 "traddr": "10.0.0.1", 00:13:27.177 "trsvcid": "47374" 00:13:27.177 }, 00:13:27.177 "auth": { 00:13:27.177 "state": "completed", 00:13:27.177 "digest": "sha512", 00:13:27.177 "dhgroup": "ffdhe6144" 00:13:27.177 } 00:13:27.177 } 00:13:27.177 ]' 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.177 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.436 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.436 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.436 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.436 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.694 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:27.695 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.262 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 14:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.521 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.089 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.089 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.349 { 00:13:29.349 "cntlid": 131, 00:13:29.349 "qid": 0, 00:13:29.349 "state": "enabled", 00:13:29.349 "thread": "nvmf_tgt_poll_group_000", 00:13:29.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:29.349 "listen_address": { 00:13:29.349 "trtype": "TCP", 00:13:29.349 "adrfam": "IPv4", 00:13:29.349 "traddr": "10.0.0.3", 00:13:29.349 "trsvcid": "4420" 00:13:29.349 }, 00:13:29.349 "peer_address": { 00:13:29.349 "trtype": "TCP", 00:13:29.349 "adrfam": "IPv4", 00:13:29.349 "traddr": "10.0.0.1", 00:13:29.349 "trsvcid": "47396" 00:13:29.349 }, 00:13:29.349 "auth": { 00:13:29.349 "state": "completed", 00:13:29.349 "digest": "sha512", 00:13:29.349 "dhgroup": "ffdhe6144" 00:13:29.349 } 00:13:29.349 } 00:13:29.349 ]' 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.349 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.608 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:29.608 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.175 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.433 14:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.433 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.002 00:13:31.002 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.002 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.002 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.261 { 00:13:31.261 "cntlid": 133, 00:13:31.261 "qid": 0, 00:13:31.261 "state": "enabled", 00:13:31.261 "thread": "nvmf_tgt_poll_group_000", 00:13:31.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:31.261 "listen_address": { 00:13:31.261 "trtype": "TCP", 00:13:31.261 "adrfam": "IPv4", 00:13:31.261 "traddr": "10.0.0.3", 00:13:31.261 "trsvcid": "4420" 00:13:31.261 }, 00:13:31.261 "peer_address": { 00:13:31.261 "trtype": "TCP", 00:13:31.261 "adrfam": "IPv4", 00:13:31.261 "traddr": "10.0.0.1", 00:13:31.261 "trsvcid": "47444" 00:13:31.261 }, 00:13:31.261 "auth": { 00:13:31.261 "state": "completed", 00:13:31.261 "digest": "sha512", 00:13:31.261 "dhgroup": "ffdhe6144" 00:13:31.261 } 00:13:31.261 } 00:13:31.261 ]' 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.261 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.262 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.262 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.521 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:31.521 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.088 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.347 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.914 00:13:32.914 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.914 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.914 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.173 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.173 { 00:13:33.173 "cntlid": 135, 00:13:33.173 "qid": 0, 00:13:33.173 "state": "enabled", 00:13:33.173 "thread": "nvmf_tgt_poll_group_000", 00:13:33.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:33.174 "listen_address": { 00:13:33.174 "trtype": "TCP", 00:13:33.174 "adrfam": "IPv4", 00:13:33.174 "traddr": "10.0.0.3", 00:13:33.174 "trsvcid": "4420" 00:13:33.174 }, 00:13:33.174 "peer_address": { 00:13:33.174 "trtype": "TCP", 00:13:33.174 "adrfam": "IPv4", 00:13:33.174 "traddr": "10.0.0.1", 00:13:33.174 "trsvcid": "56920" 00:13:33.174 }, 00:13:33.174 "auth": { 00:13:33.174 "state": "completed", 00:13:33.174 "digest": "sha512", 00:13:33.174 "dhgroup": "ffdhe6144" 00:13:33.174 } 00:13:33.174 } 00:13:33.174 ]' 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.174 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.432 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:33.432 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:34.000 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.000 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:34.000 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.000 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.258 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.258 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.258 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.258 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:34.258 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.517 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.085 00:13:35.085 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.085 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.085 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.344 { 00:13:35.344 "cntlid": 137, 00:13:35.344 "qid": 0, 00:13:35.344 "state": "enabled", 00:13:35.344 "thread": "nvmf_tgt_poll_group_000", 00:13:35.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:35.344 "listen_address": { 00:13:35.344 "trtype": "TCP", 00:13:35.344 "adrfam": "IPv4", 00:13:35.344 "traddr": "10.0.0.3", 00:13:35.344 "trsvcid": "4420" 00:13:35.344 }, 00:13:35.344 "peer_address": { 00:13:35.344 "trtype": "TCP", 00:13:35.344 "adrfam": "IPv4", 00:13:35.344 "traddr": "10.0.0.1", 00:13:35.344 "trsvcid": "56954" 00:13:35.344 }, 00:13:35.344 "auth": { 00:13:35.344 "state": "completed", 00:13:35.344 "digest": "sha512", 00:13:35.344 "dhgroup": "ffdhe8192" 00:13:35.344 } 00:13:35.344 } 00:13:35.344 ]' 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.344 14:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.344 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.344 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.344 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.344 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.603 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:35.603 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:36.170 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.428 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:36.688 14:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.688 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.256 00:13:37.256 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.256 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.256 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.514 { 00:13:37.514 "cntlid": 139, 00:13:37.514 "qid": 0, 00:13:37.514 "state": "enabled", 00:13:37.514 "thread": "nvmf_tgt_poll_group_000", 00:13:37.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:37.514 "listen_address": { 00:13:37.514 "trtype": "TCP", 00:13:37.514 "adrfam": "IPv4", 00:13:37.514 "traddr": "10.0.0.3", 00:13:37.514 "trsvcid": "4420" 00:13:37.514 }, 00:13:37.514 "peer_address": { 00:13:37.514 "trtype": "TCP", 00:13:37.514 "adrfam": "IPv4", 00:13:37.514 "traddr": "10.0.0.1", 00:13:37.514 "trsvcid": "56978" 00:13:37.514 }, 00:13:37.514 "auth": { 00:13:37.514 "state": "completed", 00:13:37.514 "digest": "sha512", 00:13:37.514 "dhgroup": "ffdhe8192" 00:13:37.514 } 00:13:37.514 } 00:13:37.514 ]' 00:13:37.514 14:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.514 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.773 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.773 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.773 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.773 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:37.773 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: --dhchap-ctrl-secret DHHC-1:02:ZjIzZmQ4ZDVhYmEyZDUyNGFkMTY1NjVkYzM2MzFkYmYyY2M4ZDU1NWQyYmUwYTE1DTge4A==: 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.340 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.599 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.166 00:13:39.166 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.166 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.166 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.734 { 00:13:39.734 "cntlid": 141, 00:13:39.734 "qid": 0, 00:13:39.734 "state": "enabled", 00:13:39.734 "thread": "nvmf_tgt_poll_group_000", 00:13:39.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:39.734 "listen_address": { 00:13:39.734 "trtype": "TCP", 00:13:39.734 "adrfam": "IPv4", 00:13:39.734 "traddr": "10.0.0.3", 00:13:39.734 "trsvcid": "4420" 00:13:39.734 }, 00:13:39.734 "peer_address": { 00:13:39.734 "trtype": "TCP", 00:13:39.734 "adrfam": "IPv4", 00:13:39.734 "traddr": "10.0.0.1", 00:13:39.734 "trsvcid": "56992" 00:13:39.734 }, 00:13:39.734 "auth": { 00:13:39.734 "state": "completed", 00:13:39.734 "digest": 
"sha512", 00:13:39.734 "dhgroup": "ffdhe8192" 00:13:39.734 } 00:13:39.734 } 00:13:39.734 ]' 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.734 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.993 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:39.993 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:01:NzY4YTZlNDM2OWJiMGYwOTg5MmVjYmQ1MGMyNmM2Zjjvy1Bt: 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.560 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.818 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.458 00:13:41.458 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.458 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.458 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.458 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.458 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.458 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.458 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.716 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.716 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.716 { 00:13:41.716 "cntlid": 143, 00:13:41.716 "qid": 0, 00:13:41.716 "state": "enabled", 00:13:41.716 "thread": "nvmf_tgt_poll_group_000", 00:13:41.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:41.716 "listen_address": { 00:13:41.716 "trtype": "TCP", 00:13:41.716 "adrfam": "IPv4", 00:13:41.716 "traddr": "10.0.0.3", 00:13:41.716 "trsvcid": "4420" 00:13:41.716 }, 00:13:41.716 "peer_address": { 00:13:41.716 "trtype": "TCP", 00:13:41.716 "adrfam": "IPv4", 00:13:41.716 "traddr": "10.0.0.1", 00:13:41.716 "trsvcid": "57016" 00:13:41.717 }, 00:13:41.717 "auth": { 00:13:41.717 "state": "completed", 00:13:41.717 
"digest": "sha512", 00:13:41.717 "dhgroup": "ffdhe8192" 00:13:41.717 } 00:13:41.717 } 00:13:41.717 ]' 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.717 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.976 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:41.976 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.913 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.481 00:13:43.481 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.481 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.481 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.740 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.740 { 00:13:43.740 "cntlid": 145, 00:13:43.740 "qid": 0, 00:13:43.740 "state": "enabled", 00:13:43.740 "thread": "nvmf_tgt_poll_group_000", 00:13:43.740 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:43.741 "listen_address": { 00:13:43.741 "trtype": "TCP", 00:13:43.741 "adrfam": "IPv4", 00:13:43.741 "traddr": "10.0.0.3", 00:13:43.741 "trsvcid": "4420" 00:13:43.741 }, 00:13:43.741 "peer_address": { 00:13:43.741 "trtype": "TCP", 00:13:43.741 "adrfam": "IPv4", 00:13:43.741 "traddr": "10.0.0.1", 00:13:43.741 "trsvcid": "47200" 00:13:43.741 }, 00:13:43.741 "auth": { 00:13:43.741 "state": "completed", 00:13:43.741 "digest": "sha512", 00:13:43.741 "dhgroup": "ffdhe8192" 00:13:43.741 } 00:13:43.741 } 00:13:43.741 ]' 00:13:43.741 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.000 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.260 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:44.260 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:00:MWMyMTE0MGNiZjg1NzUxMzY0OWVhMzkzZTU4ZDU3OGMwMmUxYjJmNWFmNWJmNTdkcquJPA==: --dhchap-ctrl-secret DHHC-1:03:ODk3ODFmZWM2ZDAzYmFhYTE3YjM0MGMzMGNjODMxMTNhM2ZkZDQyYTA1M2FmY2NjNDBjOWNkYTlkZjdhYTY1NH0gOe4=: 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 00:13:44.828 14:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:44.828 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:45.396 request: 00:13:45.396 { 00:13:45.396 "name": "nvme0", 00:13:45.396 "trtype": "tcp", 00:13:45.396 "traddr": "10.0.0.3", 00:13:45.396 "adrfam": "ipv4", 00:13:45.396 "trsvcid": "4420", 00:13:45.396 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:45.396 "prchk_reftag": false, 00:13:45.396 "prchk_guard": false, 00:13:45.396 "hdgst": false, 00:13:45.396 "ddgst": false, 00:13:45.396 "dhchap_key": "key2", 00:13:45.396 "allow_unrecognized_csi": false, 00:13:45.396 "method": "bdev_nvme_attach_controller", 00:13:45.396 "req_id": 1 00:13:45.396 } 00:13:45.396 Got JSON-RPC error response 00:13:45.396 response: 00:13:45.396 { 00:13:45.396 "code": -5, 00:13:45.396 "message": "Input/output error" 00:13:45.396 } 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:45.396 
14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.396 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:45.396 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:45.964 request: 00:13:45.964 { 00:13:45.964 "name": "nvme0", 00:13:45.964 "trtype": "tcp", 00:13:45.964 "traddr": "10.0.0.3", 00:13:45.964 "adrfam": "ipv4", 00:13:45.964 "trsvcid": "4420", 00:13:45.964 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:45.964 "prchk_reftag": false, 00:13:45.964 "prchk_guard": false, 00:13:45.964 "hdgst": false, 00:13:45.964 "ddgst": false, 00:13:45.964 "dhchap_key": "key1", 00:13:45.964 "dhchap_ctrlr_key": "ckey2", 00:13:45.964 "allow_unrecognized_csi": false, 00:13:45.964 "method": "bdev_nvme_attach_controller", 00:13:45.964 "req_id": 1 00:13:45.964 } 00:13:45.964 Got JSON-RPC error response 00:13:45.964 response: 00:13:45.964 { 
00:13:45.964 "code": -5, 00:13:45.964 "message": "Input/output error" 00:13:45.964 } 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.964 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.965 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.534 
request: 00:13:46.534 { 00:13:46.534 "name": "nvme0", 00:13:46.534 "trtype": "tcp", 00:13:46.534 "traddr": "10.0.0.3", 00:13:46.534 "adrfam": "ipv4", 00:13:46.534 "trsvcid": "4420", 00:13:46.534 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:46.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:46.534 "prchk_reftag": false, 00:13:46.534 "prchk_guard": false, 00:13:46.534 "hdgst": false, 00:13:46.534 "ddgst": false, 00:13:46.534 "dhchap_key": "key1", 00:13:46.534 "dhchap_ctrlr_key": "ckey1", 00:13:46.534 "allow_unrecognized_csi": false, 00:13:46.534 "method": "bdev_nvme_attach_controller", 00:13:46.534 "req_id": 1 00:13:46.534 } 00:13:46.534 Got JSON-RPC error response 00:13:46.534 response: 00:13:46.534 { 00:13:46.534 "code": -5, 00:13:46.534 "message": "Input/output error" 00:13:46.534 } 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67550 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67550 ']' 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67550 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67550 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.535 killing process with pid 67550 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67550' 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67550 00:13:46.535 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67550 00:13:46.794 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:46.795 14:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70516 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70516 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70516 ']' 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.795 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.731 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.731 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:47.731 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.731 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.731 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.990 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.990 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70516 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70516 ']' 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.991 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 null0 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fNK 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.EW6 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EW6 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.F5I 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.F9z ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F9z 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:48.251 14:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AM5 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.bNq ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNq 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.weA 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
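The entries above load the generated DH-HMAC-CHAP secrets into the target keyring (key0..key3 plus the controller keys ckey0..ckey2), register the host NQN against cnode0 with key3, and then attach from the host side with the matching key. A condensed sketch of that provisioning sequence using the key files and NQNs from this run; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the host-side app is assumed to have the same key names loaded into its own keyring earlier in the test:

  # target side: load host and controller secrets into the keyring
  rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.fNK
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EW6
  rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.weA   # key3 has no paired ckey in this run

  # authorize the host NQN on the subsystem, pinning it to key3
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 \
      --dhchap-key key3

  # host side (separate RPC socket): attach, negotiating DH-HMAC-CHAP with key3
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3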
00:13:48.251 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.189 nvme0n1 00:13:49.189 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.189 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.189 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.448 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.448 { 00:13:49.448 "cntlid": 1, 00:13:49.448 "qid": 0, 00:13:49.448 "state": "enabled", 00:13:49.448 "thread": "nvmf_tgt_poll_group_000", 00:13:49.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:49.448 "listen_address": { 00:13:49.448 "trtype": "TCP", 00:13:49.448 "adrfam": "IPv4", 00:13:49.448 "traddr": "10.0.0.3", 00:13:49.448 "trsvcid": "4420" 00:13:49.448 }, 00:13:49.448 "peer_address": { 00:13:49.448 "trtype": "TCP", 00:13:49.448 "adrfam": "IPv4", 00:13:49.448 "traddr": "10.0.0.1", 00:13:49.448 "trsvcid": "47274" 00:13:49.448 }, 00:13:49.448 "auth": { 00:13:49.448 "state": "completed", 00:13:49.448 "digest": "sha512", 00:13:49.448 "dhgroup": "ffdhe8192" 00:13:49.448 } 00:13:49.448 } 00:13:49.448 ]' 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.448 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.707 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.707 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.707 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.707 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.707 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.966 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:49.966 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key3 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:50.533 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.101 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.360 request: 00:13:51.360 { 00:13:51.360 "name": "nvme0", 00:13:51.360 "trtype": "tcp", 00:13:51.360 "traddr": "10.0.0.3", 00:13:51.360 "adrfam": "ipv4", 00:13:51.360 "trsvcid": "4420", 00:13:51.360 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:51.360 "prchk_reftag": false, 00:13:51.360 "prchk_guard": false, 00:13:51.360 "hdgst": false, 00:13:51.360 "ddgst": false, 00:13:51.361 "dhchap_key": "key3", 00:13:51.361 "allow_unrecognized_csi": false, 00:13:51.361 "method": "bdev_nvme_attach_controller", 00:13:51.361 "req_id": 1 00:13:51.361 } 00:13:51.361 Got JSON-RPC error response 00:13:51.361 response: 00:13:51.361 { 00:13:51.361 "code": -5, 00:13:51.361 "message": "Input/output error" 00:13:51.361 } 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:51.361 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.620 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.880 request: 00:13:51.880 { 00:13:51.880 "name": "nvme0", 00:13:51.880 "trtype": "tcp", 00:13:51.880 "traddr": "10.0.0.3", 00:13:51.880 "adrfam": "ipv4", 00:13:51.880 "trsvcid": "4420", 00:13:51.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:51.880 "prchk_reftag": false, 00:13:51.880 "prchk_guard": false, 00:13:51.880 "hdgst": false, 00:13:51.880 "ddgst": false, 00:13:51.880 "dhchap_key": "key3", 00:13:51.880 "allow_unrecognized_csi": false, 00:13:51.880 "method": "bdev_nvme_attach_controller", 00:13:51.880 "req_id": 1 00:13:51.880 } 00:13:51.880 Got JSON-RPC error response 00:13:51.880 response: 00:13:51.880 { 00:13:51.880 "code": -5, 00:13:51.880 "message": "Input/output error" 00:13:51.880 } 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:51.880 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.139 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.708 request: 00:13:52.708 { 00:13:52.708 "name": "nvme0", 00:13:52.708 "trtype": "tcp", 00:13:52.708 "traddr": "10.0.0.3", 00:13:52.708 "adrfam": "ipv4", 00:13:52.708 "trsvcid": "4420", 00:13:52.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:52.708 "prchk_reftag": false, 00:13:52.708 "prchk_guard": false, 00:13:52.708 "hdgst": false, 00:13:52.708 "ddgst": false, 00:13:52.708 "dhchap_key": "key0", 00:13:52.708 "dhchap_ctrlr_key": "key1", 00:13:52.708 "allow_unrecognized_csi": false, 00:13:52.708 "method": "bdev_nvme_attach_controller", 00:13:52.708 "req_id": 1 00:13:52.708 } 00:13:52.708 Got JSON-RPC error response 00:13:52.708 response: 00:13:52.708 { 00:13:52.708 "code": -5, 00:13:52.708 "message": "Input/output error" 00:13:52.708 } 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:52.708 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:52.966 nvme0n1 00:13:52.966 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:52.966 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.966 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:53.225 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.225 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.225 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.483 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 00:13:53.483 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.483 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.483 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.484 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:53.484 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:53.484 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:54.421 nvme0n1 00:13:54.421 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:54.421 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:54.421 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.679 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:54.680 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.938 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.938 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:54.938 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid b8aa9432-d384-4354-98be-2d5e1a66b801 -l 0 --dhchap-secret DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: --dhchap-ctrl-secret DHHC-1:03:N2MxMzVkODRlMDg2OGUwMWQ1YjJhZGUxMTIyMWIxMTQ0ZjJmYmNmMGEwYjkxMDdhMDQ4ZjZjZjk0YjFmMmJkOHR30F0=: 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.505 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:55.768 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:56.362 request: 00:13:56.362 { 00:13:56.362 "name": "nvme0", 00:13:56.362 "trtype": "tcp", 00:13:56.362 "traddr": "10.0.0.3", 00:13:56.362 "adrfam": "ipv4", 00:13:56.362 "trsvcid": "4420", 00:13:56.362 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:56.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801", 00:13:56.362 "prchk_reftag": false, 00:13:56.362 "prchk_guard": false, 00:13:56.362 "hdgst": false, 00:13:56.362 "ddgst": false, 00:13:56.362 "dhchap_key": "key1", 00:13:56.362 "allow_unrecognized_csi": false, 00:13:56.362 "method": "bdev_nvme_attach_controller", 00:13:56.362 "req_id": 1 00:13:56.362 } 00:13:56.362 Got JSON-RPC error response 00:13:56.362 response: 00:13:56.362 { 00:13:56.362 "code": -5, 00:13:56.362 "message": "Input/output error" 00:13:56.362 } 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:56.362 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.298 nvme0n1 00:13:57.298 
14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:57.298 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:57.298 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.557 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.557 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.557 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:57.815 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:58.074 nvme0n1 00:13:58.074 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:58.074 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.074 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:58.333 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.333 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.333 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.592 14:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: '' 2s 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: ]] 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzFjOTU3ODNlNDBmNDc1ZWRhZTBiMjlkYTc3ODQyNGXhDL7E: 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:58.592 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: 2s 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:01.127 14:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: ]] 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTU5Yzc1OTgwOTFhNjg5ZTU0MmZiZTdhYjMzZWZiNTk5NjRiYWVhY2UzMmViNWY2E5AAAQ==: 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:01.127 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.032 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.601 nvme0n1 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.601 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.537 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:04.537 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.537 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:04.537 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:04.794 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:04.794 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.794 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:05.362 14:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.362 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.621 request: 00:14:05.621 { 00:14:05.621 "name": "nvme0", 00:14:05.621 "dhchap_key": "key1", 00:14:05.621 "dhchap_ctrlr_key": "key3", 00:14:05.621 "method": "bdev_nvme_set_keys", 00:14:05.621 "req_id": 1 00:14:05.621 } 00:14:05.621 Got JSON-RPC error response 00:14:05.621 response: 00:14:05.621 { 00:14:05.621 "code": -13, 00:14:05.621 "message": "Permission denied" 00:14:05.621 } 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.621 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:05.880 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:05.880 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:07.258 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.213 nvme0n1 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:08.213 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:08.778 request: 00:14:08.778 { 00:14:08.778 "name": "nvme0", 00:14:08.778 "dhchap_key": "key2", 00:14:08.778 "dhchap_ctrlr_key": "key0", 00:14:08.778 "method": "bdev_nvme_set_keys", 00:14:08.778 "req_id": 1 00:14:08.778 } 00:14:08.778 Got JSON-RPC error response 00:14:08.778 response: 00:14:08.778 { 00:14:08.778 "code": -13, 00:14:08.778 "message": "Permission denied" 00:14:08.778 } 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:08.778 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.035 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:09.035 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:09.968 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:09.968 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:09.968 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67569 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67569 ']' 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67569 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67569 00:14:10.225 killing process with pid 67569 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:10.225 14:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67569' 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67569 00:14:10.225 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67569 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.791 rmmod nvme_tcp 00:14:10.791 rmmod nvme_fabrics 00:14:10.791 rmmod nvme_keyring 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70516 ']' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70516 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70516 ']' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70516 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70516 00:14:10.791 killing process with pid 70516 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70516' 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70516 00:14:10.791 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70516 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:11.049 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fNK /tmp/spdk.key-sha256.F5I /tmp/spdk.key-sha384.AM5 /tmp/spdk.key-sha512.weA /tmp/spdk.key-sha512.EW6 /tmp/spdk.key-sha384.F9z /tmp/spdk.key-sha256.bNq '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:11.309 00:14:11.309 real 3m0.273s 00:14:11.309 user 7m10.259s 00:14:11.309 sys 0m29.505s 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.309 ************************************ 00:14:11.309 END TEST nvmf_auth_target 
00:14:11.309 ************************************ 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.309 14:52:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.569 ************************************ 00:14:11.569 START TEST nvmf_bdevio_no_huge 00:14:11.569 ************************************ 00:14:11.569 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:11.569 * Looking for test storage... 00:14:11.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:11.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.569 --rc genhtml_branch_coverage=1 00:14:11.569 --rc genhtml_function_coverage=1 00:14:11.569 --rc genhtml_legend=1 00:14:11.569 --rc geninfo_all_blocks=1 00:14:11.569 --rc geninfo_unexecuted_blocks=1 00:14:11.569 00:14:11.569 ' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:11.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.569 --rc genhtml_branch_coverage=1 00:14:11.569 --rc genhtml_function_coverage=1 00:14:11.569 --rc genhtml_legend=1 00:14:11.569 --rc geninfo_all_blocks=1 00:14:11.569 --rc geninfo_unexecuted_blocks=1 00:14:11.569 00:14:11.569 ' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:11.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.569 --rc genhtml_branch_coverage=1 00:14:11.569 --rc genhtml_function_coverage=1 00:14:11.569 --rc genhtml_legend=1 00:14:11.569 --rc geninfo_all_blocks=1 00:14:11.569 --rc geninfo_unexecuted_blocks=1 00:14:11.569 00:14:11.569 ' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:11.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.569 --rc genhtml_branch_coverage=1 00:14:11.569 --rc genhtml_function_coverage=1 00:14:11.569 --rc genhtml_legend=1 00:14:11.569 --rc geninfo_all_blocks=1 00:14:11.569 --rc geninfo_unexecuted_blocks=1 00:14:11.569 00:14:11.569 ' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.569 
14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.569 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.570 
14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:11.570 Cannot find device "nvmf_init_br" 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:11.570 Cannot find device "nvmf_init_br2" 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:11.570 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:11.859 Cannot find device "nvmf_tgt_br" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.859 Cannot find device "nvmf_tgt_br2" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:11.859 Cannot find device "nvmf_init_br" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:11.859 Cannot find device "nvmf_init_br2" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:11.859 Cannot find device "nvmf_tgt_br" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:11.859 Cannot find device "nvmf_tgt_br2" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:11.859 Cannot find device "nvmf_br" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:11.859 Cannot find device "nvmf_init_if" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:11.859 Cannot find device "nvmf_init_if2" 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:11.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:11.859 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:12.127 14:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:12.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:14:12.127 00:14:12.127 --- 10.0.0.3 ping statistics --- 00:14:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.127 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:12.127 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:12.127 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:14:12.127 00:14:12.127 --- 10.0.0.4 ping statistics --- 00:14:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.127 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:12.127 00:14:12.127 --- 10.0.0.1 ping statistics --- 00:14:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.127 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:12.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:12.127 00:14:12.127 --- 10.0.0.2 ping statistics --- 00:14:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.127 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71152 00:14:12.127 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71152 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71152 ']' 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.128 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.128 [2024-11-22 14:52:26.731663] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
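For reference, the --no-huge target start traced above reduces to a few shell steps; the sketch below reuses the command line, binary path and RPC socket path from this run, while the rpc.py wait step and its flags are assumed harness defaults rather than anything re-verified here.

# Launch the NVMe-oF target in the test namespace without hugepages (DPDK memory
# capped at 1024 MB, reactors on cores 3-6 per mask 0x78), run it in the background,
# then poll its JSON-RPC socket the way waitforlisten does before issuing RPCs.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Returns once the app is listening on the socket (method name and -t timeout flag are assumptions):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null
# The TCP transport is then created exactly as rpc_cmd does a few entries further down:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192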
00:14:12.128 [2024-11-22 14:52:26.731828] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:12.387 [2024-11-22 14:52:26.903216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.387 [2024-11-22 14:52:26.986089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.387 [2024-11-22 14:52:26.986147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.387 [2024-11-22 14:52:26.986161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.387 [2024-11-22 14:52:26.986172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.387 [2024-11-22 14:52:26.986181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.387 [2024-11-22 14:52:26.987119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.387 [2024-11-22 14:52:26.987232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:12.387 [2024-11-22 14:52:26.987411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.387 [2024-11-22 14:52:26.987425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:12.387 [2024-11-22 14:52:26.993777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.324 [2024-11-22 14:52:27.831639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.324 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.324 Malloc0 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.325 14:52:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.325 [2024-11-22 14:52:27.873055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:13.325 { 00:14:13.325 "params": { 00:14:13.325 "name": "Nvme$subsystem", 00:14:13.325 "trtype": "$TEST_TRANSPORT", 00:14:13.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.325 "adrfam": "ipv4", 00:14:13.325 "trsvcid": "$NVMF_PORT", 00:14:13.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.325 "hdgst": ${hdgst:-false}, 00:14:13.325 "ddgst": ${ddgst:-false} 00:14:13.325 }, 00:14:13.325 "method": "bdev_nvme_attach_controller" 00:14:13.325 } 00:14:13.325 EOF 00:14:13.325 )") 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
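The configuration that bdevio reads from /dev/fd/62 here is ordinary SPDK --json input; a hand-written equivalent is sketched below, where the subsystems/config wrapper is the usual layout assumed for such files and the attach parameters mirror the values printed next in this log. The file name is illustrative only.

# Stand-alone config file; the /dev/fd/62 redirection above streams the same
# content to bdevio without touching disk.
cat <<'JSON' > /tmp/bdevio_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Point bdevio at the file instead of the fd, with the same no-hugepage options:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024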
00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:13.325 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:13.325 "params": { 00:14:13.325 "name": "Nvme1", 00:14:13.325 "trtype": "tcp", 00:14:13.325 "traddr": "10.0.0.3", 00:14:13.325 "adrfam": "ipv4", 00:14:13.325 "trsvcid": "4420", 00:14:13.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.325 "hdgst": false, 00:14:13.325 "ddgst": false 00:14:13.325 }, 00:14:13.325 "method": "bdev_nvme_attach_controller" 00:14:13.325 }' 00:14:13.325 [2024-11-22 14:52:27.936259] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:14:13.325 [2024-11-22 14:52:27.936365] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71188 ] 00:14:13.584 [2024-11-22 14:52:28.103480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:13.584 [2024-11-22 14:52:28.195038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.584 [2024-11-22 14:52:28.195176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.584 [2024-11-22 14:52:28.195180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.584 [2024-11-22 14:52:28.209889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.844 I/O targets: 00:14:13.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:13.844 00:14:13.844 00:14:13.844 CUnit - A unit testing framework for C - Version 2.1-3 00:14:13.844 http://cunit.sourceforge.net/ 00:14:13.844 00:14:13.844 00:14:13.844 Suite: bdevio tests on: Nvme1n1 00:14:13.844 Test: blockdev write read block ...passed 00:14:13.844 Test: blockdev write zeroes read block ...passed 00:14:13.844 Test: blockdev write zeroes read no split ...passed 00:14:13.844 Test: blockdev write zeroes read split ...passed 00:14:13.844 Test: blockdev write zeroes read split partial ...passed 00:14:13.844 Test: blockdev reset ...[2024-11-22 14:52:28.477433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:13.844 [2024-11-22 14:52:28.477543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141f670 (9): Bad file descriptor 00:14:13.844 passed 00:14:13.844 Test: blockdev write read 8 blocks ...[2024-11-22 14:52:28.493172] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:13.844 passed 00:14:13.844 Test: blockdev write read size > 128k ...passed 00:14:13.844 Test: blockdev write read invalid size ...passed 00:14:13.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:13.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:13.844 Test: blockdev write read max offset ...passed 00:14:13.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:13.844 Test: blockdev writev readv 8 blocks ...passed 00:14:13.844 Test: blockdev writev readv 30 x 1block ...passed 00:14:13.844 Test: blockdev writev readv block ...passed 00:14:13.844 Test: blockdev writev readv size > 128k ...passed 00:14:13.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:13.844 Test: blockdev comparev and writev ...[2024-11-22 14:52:28.503468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.503505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.503524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.503535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:13.844 passed 00:14:13.844 Test: blockdev nvme passthru rw ...[2024-11-22 14:52:28.504034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.504055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.504070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.504090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.504517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.504534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.504550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.504559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.504971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.504986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:13.844 [2024-11-22 14:52:28.505001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.844 [2024-11-22 14:52:28.505011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:13.844 passed 00:14:14.103 Test: blockdev nvme passthru vendor specific ...[2024-11-22 14:52:28.505990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.103 [2024-11-22 14:52:28.506011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:14.104 [2024-11-22 14:52:28.506146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.104 [2024-11-22 14:52:28.506161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:14.104 [2024-11-22 14:52:28.506273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.104 [2024-11-22 14:52:28.506302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:14.104 [2024-11-22 14:52:28.506473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.104 [2024-11-22 14:52:28.506493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:14.104 passed 00:14:14.104 Test: blockdev nvme admin passthru ...passed 00:14:14.104 Test: blockdev copy ...passed 00:14:14.104 00:14:14.104 Run Summary: Type Total Ran Passed Failed Inactive 00:14:14.104 suites 1 1 n/a 0 0 00:14:14.104 tests 23 23 23 0 0 00:14:14.104 asserts 152 152 152 0 n/a 00:14:14.104 00:14:14.104 Elapsed time = 0.165 seconds 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.363 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.363 rmmod nvme_tcp 00:14:14.363 rmmod nvme_fabrics 00:14:14.363 rmmod nvme_keyring 00:14:14.363 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:14.622 14:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71152 ']' 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71152 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71152 ']' 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71152 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71152 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:14.622 killing process with pid 71152 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71152' 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71152 00:14:14.622 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71152 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:14.881 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:15.139 00:14:15.139 real 0m3.744s 00:14:15.139 user 0m11.359s 00:14:15.139 sys 0m1.523s 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.139 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.139 ************************************ 00:14:15.140 END TEST nvmf_bdevio_no_huge 00:14:15.140 ************************************ 00:14:15.140 14:52:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:15.140 14:52:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.140 14:52:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.140 14:52:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.140 ************************************ 00:14:15.140 START TEST nvmf_tls 00:14:15.140 ************************************ 00:14:15.140 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:15.400 * Looking for test storage... 
00:14:15.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.400 --rc genhtml_branch_coverage=1 00:14:15.400 --rc genhtml_function_coverage=1 00:14:15.400 --rc genhtml_legend=1 00:14:15.400 --rc geninfo_all_blocks=1 00:14:15.400 --rc geninfo_unexecuted_blocks=1 00:14:15.400 00:14:15.400 ' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.400 --rc genhtml_branch_coverage=1 00:14:15.400 --rc genhtml_function_coverage=1 00:14:15.400 --rc genhtml_legend=1 00:14:15.400 --rc geninfo_all_blocks=1 00:14:15.400 --rc geninfo_unexecuted_blocks=1 00:14:15.400 00:14:15.400 ' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.400 --rc genhtml_branch_coverage=1 00:14:15.400 --rc genhtml_function_coverage=1 00:14:15.400 --rc genhtml_legend=1 00:14:15.400 --rc geninfo_all_blocks=1 00:14:15.400 --rc geninfo_unexecuted_blocks=1 00:14:15.400 00:14:15.400 ' 00:14:15.400 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.400 --rc genhtml_branch_coverage=1 00:14:15.400 --rc genhtml_function_coverage=1 00:14:15.400 --rc genhtml_legend=1 00:14:15.400 --rc geninfo_all_blocks=1 00:14:15.400 --rc geninfo_unexecuted_blocks=1 00:14:15.400 00:14:15.401 ' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.401 14:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.401 
14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.401 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.402 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.402 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.402 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.402 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:15.402 Cannot find device "nvmf_init_br" 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:15.402 Cannot find device "nvmf_init_br2" 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:15.402 Cannot find device "nvmf_tgt_br" 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.402 Cannot find device "nvmf_tgt_br2" 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:15.402 Cannot find device "nvmf_init_br" 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:15.402 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:15.661 Cannot find device "nvmf_init_br2" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:15.661 Cannot find device "nvmf_tgt_br" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:15.661 Cannot find device "nvmf_tgt_br2" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:15.661 Cannot find device "nvmf_br" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:15.661 Cannot find device "nvmf_init_if" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:15.661 Cannot find device "nvmf_init_if2" 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.661 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:15.920 14:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:15.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:15.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:14:15.920 00:14:15.920 --- 10.0.0.3 ping statistics --- 00:14:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.920 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:15.920 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:15.920 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:14:15.920 00:14:15.920 --- 10.0.0.4 ping statistics --- 00:14:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.920 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:15.920 00:14:15.920 --- 10.0.0.1 ping statistics --- 00:14:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.920 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:15.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:15.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:15.920 00:14:15.920 --- 10.0.0.2 ping statistics --- 00:14:15.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.920 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.920 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71426 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71426 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71426 ']' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.921 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.921 [2024-11-22 14:52:30.491913] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
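Before any TLS-specific configuration, nvmf_veth_init builds the topology that the ping checks above exercise: a namespace nvmf_tgt_ns_spdk holds the target-side veths (10.0.0.3 and 10.0.0.4), the initiator-side veths stay in the root namespace (10.0.0.1 and 10.0.0.2), all peer ends are enslaved to the nvmf_br bridge, and iptables ACCEPT rules for TCP port 4420 are tagged SPDK_NVMF so the teardown can strip them later. A condensed sketch of the commands traced above, shown for one of the two veth pairs:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3    # the namespaced target address must answer before the test proceeds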
00:14:15.921 [2024-11-22 14:52:30.491995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.179 [2024-11-22 14:52:30.646920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.179 [2024-11-22 14:52:30.708594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.179 [2024-11-22 14:52:30.708670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.179 [2024-11-22 14:52:30.708685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.179 [2024-11-22 14:52:30.708696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.179 [2024-11-22 14:52:30.708705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.179 [2024-11-22 14:52:30.709229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:16.179 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:16.437 true 00:14:16.437 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:16.437 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:16.696 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:16.696 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:16.696 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:16.955 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:16.955 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:17.214 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:17.214 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:17.214 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:17.473 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:17.473 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:17.731 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:17.731 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:17.731 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.731 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:17.991 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:17.991 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:17.991 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:18.249 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.249 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:18.508 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:18.508 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:18.508 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:18.766 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:18.767 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:18.767 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:18.767 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.mPftIuV1Fx 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.DLZOFJiPlx 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mPftIuV1Fx 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.DLZOFJiPlx 00:14:19.025 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:19.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:19.544 [2024-11-22 14:52:34.078995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.544 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.mPftIuV1Fx 00:14:19.544 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mPftIuV1Fx 00:14:19.544 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.803 [2024-11-22 14:52:34.422385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.803 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:20.062 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:20.321 [2024-11-22 14:52:34.934536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:20.321 [2024-11-22 14:52:34.934751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.321 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:20.580 malloc0 00:14:20.580 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.838 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mPftIuV1Fx 00:14:21.098 14:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:21.357 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.mPftIuV1Fx 00:14:33.578 Initializing NVMe Controllers 00:14:33.578 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.578 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.578 Initialization complete. Launching workers. 00:14:33.578 ======================================================== 00:14:33.578 Latency(us) 00:14:33.578 Device Information : IOPS MiB/s Average min max 00:14:33.578 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11572.21 45.20 5531.38 1420.91 7171.42 00:14:33.578 ======================================================== 00:14:33.578 Total : 11572.21 45.20 5531.38 1420.91 7171.42 00:14:33.578 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mPftIuV1Fx 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mPftIuV1Fx 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71653 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71653 /var/tmp/bdevperf.sock 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71653 ']' 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
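Stripped of the xtrace noise, the target-side setup just traced is: generate two PSKs in the NVMe TLS interchange format (NVMeTLSkey-1:01:<base64-encoded key material>:), write them to 0600 temp files, force TLS 1.3 on the ssl socket implementation, and build a subsystem with a TLS-enabled listener whose PSK is bound to host1. A condensed sketch using the same rpc.py calls and names as the trace, with paths shortened into shell variables for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/tmp/tmp.mPftIuV1Fx                  # holds NVMeTLSkey-1:01:MDAx...JEiQ:
  chmod 0600 "$key_path"
  # the target was started with --wait-for-rpc, so socket options can be set before init
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init
  # subsystem with a TLS-enabled (-k) TCP listener on the namespaced target address
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # load the PSK into the target keyring and authorize host1 with it
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0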
00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.578 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.578 [2024-11-22 14:52:46.116168] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:14:33.578 [2024-11-22 14:52:46.116296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71653 ] 00:14:33.578 [2024-11-22 14:52:46.269057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.578 [2024-11-22 14:52:46.335976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.578 [2024-11-22 14:52:46.411947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.578 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.578 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:33.578 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mPftIuV1Fx 00:14:33.578 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:33.578 [2024-11-22 14:52:47.539777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.578 TLSTESTn1 00:14:33.579 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:33.579 Running I/O for 10 seconds... 
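The initiator side of this first test (tls.sh line 144) mirrors that setup: bdevperf is started idle with -z, the same PSK file is loaded into its keyring over the bdevperf RPC socket, the controller is attached with --psk key0, and bdevperf.py then drives the 10-second verify workload whose per-second throughput follows below (the earlier spdk_nvme_perf run used -S ssl with --psk-path to point directly at the key file instead of a keyring name). A condensed sketch of the calls just traced, with paths shortened into shell variables for readability:

  spdk=/home/vagrant/spdk_repo/spdk
  rpc_bdev="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $rpc_bdev keyring_file_add_key key0 /tmp/tmp.mPftIuV1Fx
  $rpc_bdev bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests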
00:14:35.140 4352.00 IOPS, 17.00 MiB/s [2024-11-22T14:52:51.183Z] 4352.00 IOPS, 17.00 MiB/s [2024-11-22T14:52:52.118Z] 4480.00 IOPS, 17.50 MiB/s [2024-11-22T14:52:53.053Z] 4584.00 IOPS, 17.91 MiB/s [2024-11-22T14:52:53.989Z] 4635.60 IOPS, 18.11 MiB/s [2024-11-22T14:52:54.924Z] 4693.17 IOPS, 18.33 MiB/s [2024-11-22T14:52:55.859Z] 4729.43 IOPS, 18.47 MiB/s [2024-11-22T14:52:56.794Z] 4751.88 IOPS, 18.56 MiB/s [2024-11-22T14:52:58.171Z] 4779.44 IOPS, 18.67 MiB/s [2024-11-22T14:52:58.171Z] 4805.70 IOPS, 18.77 MiB/s 00:14:43.506 Latency(us) 00:14:43.506 [2024-11-22T14:52:58.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.506 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:43.506 Verification LBA range: start 0x0 length 0x2000 00:14:43.506 TLSTESTn1 : 10.01 4811.93 18.80 0.00 0.00 26557.06 4766.25 20852.36 00:14:43.506 [2024-11-22T14:52:58.171Z] =================================================================================================================== 00:14:43.506 [2024-11-22T14:52:58.171Z] Total : 4811.93 18.80 0.00 0.00 26557.06 4766.25 20852.36 00:14:43.506 { 00:14:43.506 "results": [ 00:14:43.506 { 00:14:43.506 "job": "TLSTESTn1", 00:14:43.506 "core_mask": "0x4", 00:14:43.506 "workload": "verify", 00:14:43.506 "status": "finished", 00:14:43.507 "verify_range": { 00:14:43.507 "start": 0, 00:14:43.507 "length": 8192 00:14:43.507 }, 00:14:43.507 "queue_depth": 128, 00:14:43.507 "io_size": 4096, 00:14:43.507 "runtime": 10.013656, 00:14:43.507 "iops": 4811.928829989766, 00:14:43.507 "mibps": 18.796596992147524, 00:14:43.507 "io_failed": 0, 00:14:43.507 "io_timeout": 0, 00:14:43.507 "avg_latency_us": 26557.059551161718, 00:14:43.507 "min_latency_us": 4766.254545454545, 00:14:43.507 "max_latency_us": 20852.363636363636 00:14:43.507 } 00:14:43.507 ], 00:14:43.507 "core_count": 1 00:14:43.507 } 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71653 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71653 ']' 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71653 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71653 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:43.507 killing process with pid 71653 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71653' 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71653 00:14:43.507 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.507 00:14:43.507 Latency(us) 00:14:43.507 [2024-11-22T14:52:58.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.507 [2024-11-22T14:52:58.172Z] 
=================================================================================================================== 00:14:43.507 [2024-11-22T14:52:58.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.507 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71653 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DLZOFJiPlx 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DLZOFJiPlx 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DLZOFJiPlx 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DLZOFJiPlx 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71787 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71787 /var/tmp/bdevperf.sock 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71787 ']' 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
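The bdevperf instance being started above (pid 71787, tls.sh line 147) belongs to a negative test: run_bdevperf is wrapped in the harness's NOT helper and handed /tmp/tmp.DLZOFJiPlx, which is not the PSK registered for host1 on the target, so the step only passes if the attach fails. The helper's body is only partially visible in the trace (valid_exec_arg, es=1, (( !es == 0 ))), so the following is an approximation of the idiom rather than its exact code:

  # approximate shape of the NOT idiom: succeed only when the wrapped command fails
  if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DLZOFJiPlx; then
      exit 1    # attaching with the wrong PSK should not have succeeded
  fi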
00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.507 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.507 [2024-11-22 14:52:58.137190] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:14:43.507 [2024-11-22 14:52:58.137296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71787 ] 00:14:43.766 [2024-11-22 14:52:58.280882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.766 [2024-11-22 14:52:58.322069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.766 [2024-11-22 14:52:58.391917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.719 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.719 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:44.719 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DLZOFJiPlx 00:14:44.719 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:44.989 [2024-11-22 14:52:59.555120] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.989 [2024-11-22 14:52:59.560038] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:44.989 [2024-11-22 14:52:59.560672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2157fd0 (107): Transport endpoint is not connected 00:14:44.989 [2024-11-22 14:52:59.561662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2157fd0 (9): Bad file descriptor 00:14:44.989 [2024-11-22 14:52:59.562659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:44.989 [2024-11-22 14:52:59.562679] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:44.989 [2024-11-22 14:52:59.562698] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:44.989 [2024-11-22 14:52:59.562712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:44.989 request: 00:14:44.989 { 00:14:44.989 "name": "TLSTEST", 00:14:44.989 "trtype": "tcp", 00:14:44.989 "traddr": "10.0.0.3", 00:14:44.989 "adrfam": "ipv4", 00:14:44.989 "trsvcid": "4420", 00:14:44.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.989 "prchk_reftag": false, 00:14:44.989 "prchk_guard": false, 00:14:44.989 "hdgst": false, 00:14:44.989 "ddgst": false, 00:14:44.989 "psk": "key0", 00:14:44.989 "allow_unrecognized_csi": false, 00:14:44.989 "method": "bdev_nvme_attach_controller", 00:14:44.989 "req_id": 1 00:14:44.989 } 00:14:44.990 Got JSON-RPC error response 00:14:44.990 response: 00:14:44.990 { 00:14:44.990 "code": -5, 00:14:44.990 "message": "Input/output error" 00:14:44.990 } 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71787 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71787 ']' 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71787 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71787 00:14:44.990 killing process with pid 71787 00:14:44.990 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.990 00:14:44.990 Latency(us) 00:14:44.990 [2024-11-22T14:52:59.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.990 [2024-11-22T14:52:59.655Z] =================================================================================================================== 00:14:44.990 [2024-11-22T14:52:59.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71787' 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71787 00:14:44.990 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71787 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mPftIuV1Fx 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mPftIuV1Fx 
00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.mPftIuV1Fx 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mPftIuV1Fx 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71820 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71820 /var/tmp/bdevperf.sock 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71820 ']' 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.249 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.507 [2024-11-22 14:52:59.920822] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:14:45.507 [2024-11-22 14:52:59.920949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71820 ] 00:14:45.507 [2024-11-22 14:53:00.066173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.507 [2024-11-22 14:53:00.118199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.766 [2024-11-22 14:53:00.189525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.334 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.334 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.334 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mPftIuV1Fx 00:14:46.593 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:46.853 [2024-11-22 14:53:01.363533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.853 [2024-11-22 14:53:01.368840] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:46.853 [2024-11-22 14:53:01.368878] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:46.853 [2024-11-22 14:53:01.368926] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:46.853 [2024-11-22 14:53:01.369093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697fd0 (107): Transport endpoint is not connected 00:14:46.853 [2024-11-22 14:53:01.370083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697fd0 (9): Bad file descriptor 00:14:46.853 [2024-11-22 14:53:01.371080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:46.853 [2024-11-22 14:53:01.371099] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:46.853 [2024-11-22 14:53:01.371108] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:46.853 [2024-11-22 14:53:01.371123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
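
The tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above are the target-side PSK lookup during the TLS handshake: the server searches its keyring for the identity string printed in the error and finds nothing for the host2/cnode1 pairing, presumably because the key registered earlier in the run was bound to a different host. A small sketch of how that identity is composed, using only what the error text itself shows (the derivation of the leading "NVMe0R01" token is not spelled out in this log and is copied as-is):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    # identity the target looks up, exactly as printed by tcp_sock_get_key above
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"
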
00:14:46.853 request: 00:14:46.853 { 00:14:46.853 "name": "TLSTEST", 00:14:46.853 "trtype": "tcp", 00:14:46.853 "traddr": "10.0.0.3", 00:14:46.853 "adrfam": "ipv4", 00:14:46.853 "trsvcid": "4420", 00:14:46.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.853 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:46.853 "prchk_reftag": false, 00:14:46.853 "prchk_guard": false, 00:14:46.853 "hdgst": false, 00:14:46.853 "ddgst": false, 00:14:46.853 "psk": "key0", 00:14:46.853 "allow_unrecognized_csi": false, 00:14:46.853 "method": "bdev_nvme_attach_controller", 00:14:46.853 "req_id": 1 00:14:46.853 } 00:14:46.853 Got JSON-RPC error response 00:14:46.853 response: 00:14:46.853 { 00:14:46.853 "code": -5, 00:14:46.853 "message": "Input/output error" 00:14:46.853 } 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71820 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71820 ']' 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71820 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71820 00:14:46.853 killing process with pid 71820 00:14:46.853 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.853 00:14:46.853 Latency(us) 00:14:46.853 [2024-11-22T14:53:01.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.853 [2024-11-22T14:53:01.518Z] =================================================================================================================== 00:14:46.853 [2024-11-22T14:53:01.518Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71820' 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71820 00:14:46.853 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71820 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.112 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mPftIuV1Fx 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mPftIuV1Fx 
00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.mPftIuV1Fx 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mPftIuV1Fx 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71851 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71851 /var/tmp/bdevperf.sock 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71851 ']' 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.113 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.113 [2024-11-22 14:53:01.712826] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:14:47.113 [2024-11-22 14:53:01.712945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71851 ] 00:14:47.372 [2024-11-22 14:53:01.854800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.372 [2024-11-22 14:53:01.896814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.372 [2024-11-22 14:53:01.966855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.309 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.309 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:48.309 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mPftIuV1Fx 00:14:48.309 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:48.569 [2024-11-22 14:53:03.069578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.569 [2024-11-22 14:53:03.078938] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:48.569 [2024-11-22 14:53:03.078974] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:48.569 [2024-11-22 14:53:03.079019] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:48.569 [2024-11-22 14:53:03.079092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f2fd0 (107): Transport endpoint is not connected 00:14:48.569 [2024-11-22 14:53:03.080084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f2fd0 (9): Bad file descriptor 00:14:48.569 [2024-11-22 14:53:03.081081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:48.569 [2024-11-22 14:53:03.081101] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:48.569 [2024-11-22 14:53:03.081110] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:48.569 [2024-11-22 14:53:03.081125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
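
This case (host1 against cnode2) fails the same way, and like the others it is wrapped in the autotest NOT helper whose xtrace (valid_exec_arg, es=1, (( !es == 0 ))) fills the lines between the error dumps. A minimal sketch of that expected-failure pattern, assuming only the invert-the-exit-status behaviour the trace implies; this is not the actual autotest_common.sh source, which also special-cases exit codes above 128:

    NOT() {
        # run the wrapped command; succeed only if it fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT false   # returns 0: the wrapped command failed, as the test requires
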
00:14:48.569 request: 00:14:48.569 { 00:14:48.569 "name": "TLSTEST", 00:14:48.569 "trtype": "tcp", 00:14:48.569 "traddr": "10.0.0.3", 00:14:48.569 "adrfam": "ipv4", 00:14:48.569 "trsvcid": "4420", 00:14:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:48.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.569 "prchk_reftag": false, 00:14:48.569 "prchk_guard": false, 00:14:48.569 "hdgst": false, 00:14:48.569 "ddgst": false, 00:14:48.569 "psk": "key0", 00:14:48.569 "allow_unrecognized_csi": false, 00:14:48.569 "method": "bdev_nvme_attach_controller", 00:14:48.569 "req_id": 1 00:14:48.569 } 00:14:48.569 Got JSON-RPC error response 00:14:48.569 response: 00:14:48.569 { 00:14:48.569 "code": -5, 00:14:48.569 "message": "Input/output error" 00:14:48.569 } 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71851 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71851 ']' 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71851 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71851 00:14:48.569 killing process with pid 71851 00:14:48.569 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.569 00:14:48.569 Latency(us) 00:14:48.569 [2024-11-22T14:53:03.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.569 [2024-11-22T14:53:03.234Z] =================================================================================================================== 00:14:48.569 [2024-11-22T14:53:03.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71851' 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71851 00:14:48.569 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71851 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.829 14:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71885 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71885 /var/tmp/bdevperf.sock 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71885 ']' 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.829 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.829 [2024-11-22 14:53:03.435515] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:14:48.829 [2024-11-22 14:53:03.435635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71885 ] 00:14:49.089 [2024-11-22 14:53:03.584280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.089 [2024-11-22 14:53:03.628595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.089 [2024-11-22 14:53:03.698272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.348 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.348 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.348 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:49.348 [2024-11-22 14:53:03.965153] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:49.348 [2024-11-22 14:53:03.965210] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:49.348 request: 00:14:49.348 { 00:14:49.348 "name": "key0", 00:14:49.348 "path": "", 00:14:49.348 "method": "keyring_file_add_key", 00:14:49.348 "req_id": 1 00:14:49.348 } 00:14:49.348 Got JSON-RPC error response 00:14:49.348 response: 00:14:49.348 { 00:14:49.348 "code": -1, 00:14:49.348 "message": "Operation not permitted" 00:14:49.348 } 00:14:49.348 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.676 [2024-11-22 14:53:04.241311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.676 [2024-11-22 14:53:04.241361] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:49.676 request: 00:14:49.676 { 00:14:49.676 "name": "TLSTEST", 00:14:49.676 "trtype": "tcp", 00:14:49.676 "traddr": "10.0.0.3", 00:14:49.676 "adrfam": "ipv4", 00:14:49.676 "trsvcid": "4420", 00:14:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.676 "prchk_reftag": false, 00:14:49.676 "prchk_guard": false, 00:14:49.676 "hdgst": false, 00:14:49.676 "ddgst": false, 00:14:49.676 "psk": "key0", 00:14:49.676 "allow_unrecognized_csi": false, 00:14:49.676 "method": "bdev_nvme_attach_controller", 00:14:49.676 "req_id": 1 00:14:49.676 } 00:14:49.676 Got JSON-RPC error response 00:14:49.676 response: 00:14:49.676 { 00:14:49.676 "code": -126, 00:14:49.676 "message": "Required key not available" 00:14:49.676 } 00:14:49.676 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71885 00:14:49.676 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71885 ']' 00:14:49.676 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71885 00:14:49.676 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:49.676 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.676 14:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71885 00:14:49.962 killing process with pid 71885 00:14:49.962 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.962 00:14:49.962 Latency(us) 00:14:49.962 [2024-11-22T14:53:04.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.962 [2024-11-22T14:53:04.627Z] =================================================================================================================== 00:14:49.962 [2024-11-22T14:53:04.627Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71885' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71885 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71885 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71426 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71426 ']' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71426 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71426 00:14:49.962 killing process with pid 71426 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71426' 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71426 00:14:49.962 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71426 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.21R9er9XYg 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.21R9er9XYg 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71917 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71917 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71917 ']' 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.221 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.480 [2024-11-22 14:53:04.903318] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:14:50.480 [2024-11-22 14:53:04.903481] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.480 [2024-11-22 14:53:05.034286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.480 [2024-11-22 14:53:05.079789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.480 [2024-11-22 14:53:05.079856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:50.480 [2024-11-22 14:53:05.079866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.480 [2024-11-22 14:53:05.079873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.480 [2024-11-22 14:53:05.079879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.480 [2024-11-22 14:53:05.080287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.739 [2024-11-22 14:53:05.150533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.21R9er9XYg 00:14:50.739 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:50.998 [2024-11-22 14:53:05.510005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.998 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:51.256 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:51.515 [2024-11-22 14:53:06.066102] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.515 [2024-11-22 14:53:06.066533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:51.515 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:51.773 malloc0 00:14:51.773 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:52.032 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:14:52.290 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21R9er9XYg 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
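
Pulling the setup_nvmf_tgt trace above together, the target-side TLS configuration that the run_bdevperf case now starting relies on is the following sequence. The long-form key value and temp path are the ones generated in this run; the base64 payload decodes to the configured hex string followed by a four-byte trailer, which is assumed here to be a checksum since its derivation is not shown in this log:

    # 1. Materialise the PSK in the TLS interchange form and lock down its mode.
    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)                 # /tmp/tmp.21R9er9XYg in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"

    # 2. Configure the target: TCP transport, a subsystem with one malloc
    #    namespace, a TLS-enabled listener (-k), and a host entry bound to
    #    the registered key.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" keyring_file_add_key key0 "$key_long_path"
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

bdevperf then registers the same key file as key0 on its side, attaches TLSTESTn1 to this listener, and runs the 10-second verify workload whose per-second IOPS appear below.
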
00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.21R9er9XYg 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71965 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71965 /var/tmp/bdevperf.sock 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71965 ']' 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.549 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.549 [2024-11-22 14:53:07.033697] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:14:52.549 [2024-11-22 14:53:07.034220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71965 ] 00:14:52.549 [2024-11-22 14:53:07.179968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.808 [2024-11-22 14:53:07.240009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.808 [2024-11-22 14:53:07.315903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.808 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.808 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:52.808 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:14:53.066 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:53.326 [2024-11-22 14:53:07.876885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.326 TLSTESTn1 00:14:53.326 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:53.585 Running I/O for 10 seconds... 00:14:55.454 4864.00 IOPS, 19.00 MiB/s [2024-11-22T14:53:11.496Z] 4864.00 IOPS, 19.00 MiB/s [2024-11-22T14:53:12.433Z] 4869.00 IOPS, 19.02 MiB/s [2024-11-22T14:53:13.367Z] 4709.50 IOPS, 18.40 MiB/s [2024-11-22T14:53:14.303Z] 4637.00 IOPS, 18.11 MiB/s [2024-11-22T14:53:15.240Z] 4547.33 IOPS, 17.76 MiB/s [2024-11-22T14:53:16.175Z] 4496.00 IOPS, 17.56 MiB/s [2024-11-22T14:53:17.110Z] 4445.62 IOPS, 17.37 MiB/s [2024-11-22T14:53:18.487Z] 4418.56 IOPS, 17.26 MiB/s [2024-11-22T14:53:18.487Z] 4385.00 IOPS, 17.13 MiB/s 00:15:03.822 Latency(us) 00:15:03.822 [2024-11-22T14:53:18.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.822 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:03.822 Verification LBA range: start 0x0 length 0x2000 00:15:03.822 TLSTESTn1 : 10.02 4387.32 17.14 0.00 0.00 29117.60 6047.19 29908.25 00:15:03.822 [2024-11-22T14:53:18.487Z] =================================================================================================================== 00:15:03.822 [2024-11-22T14:53:18.487Z] Total : 4387.32 17.14 0.00 0.00 29117.60 6047.19 29908.25 00:15:03.822 { 00:15:03.822 "results": [ 00:15:03.822 { 00:15:03.822 "job": "TLSTESTn1", 00:15:03.822 "core_mask": "0x4", 00:15:03.822 "workload": "verify", 00:15:03.822 "status": "finished", 00:15:03.822 "verify_range": { 00:15:03.822 "start": 0, 00:15:03.822 "length": 8192 00:15:03.822 }, 00:15:03.822 "queue_depth": 128, 00:15:03.822 "io_size": 4096, 00:15:03.822 "runtime": 10.023894, 00:15:03.822 "iops": 4387.316944891875, 00:15:03.822 "mibps": 17.137956815983888, 00:15:03.822 "io_failed": 0, 00:15:03.822 "io_timeout": 0, 00:15:03.822 "avg_latency_us": 29117.60254606642, 00:15:03.822 "min_latency_us": 6047.185454545454, 00:15:03.822 
"max_latency_us": 29908.247272727273 00:15:03.822 } 00:15:03.822 ], 00:15:03.822 "core_count": 1 00:15:03.822 } 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71965 ']' 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:03.822 killing process with pid 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71965' 00:15:03.822 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.822 00:15:03.822 Latency(us) 00:15:03.822 [2024-11-22T14:53:18.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.822 [2024-11-22T14:53:18.487Z] =================================================================================================================== 00:15:03.822 [2024-11-22T14:53:18.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71965 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.21R9er9XYg 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21R9er9XYg 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21R9er9XYg 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.21R9er9XYg 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.21R9er9XYg 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72093 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72093 /var/tmp/bdevperf.sock 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72093 ']' 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.822 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 [2024-11-22 14:53:18.468158] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:15:03.822 [2024-11-22 14:53:18.468244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:15:04.081 [2024-11-22 14:53:18.605688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.081 [2024-11-22 14:53:18.663354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.081 [2024-11-22 14:53:18.735262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.340 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.340 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:04.340 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:04.598 [2024-11-22 14:53:19.014250] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.21R9er9XYg': 0100666 00:15:04.598 [2024-11-22 14:53:19.014303] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:04.598 request: 00:15:04.598 { 00:15:04.598 "name": "key0", 00:15:04.598 "path": "/tmp/tmp.21R9er9XYg", 00:15:04.598 "method": "keyring_file_add_key", 00:15:04.598 "req_id": 1 00:15:04.598 } 00:15:04.598 Got JSON-RPC error response 00:15:04.598 response: 00:15:04.598 { 00:15:04.598 "code": -1, 00:15:04.598 "message": "Operation not permitted" 00:15:04.598 } 00:15:04.598 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.856 [2024-11-22 14:53:19.298412] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.856 [2024-11-22 14:53:19.298467] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:04.856 request: 00:15:04.856 { 00:15:04.856 "name": "TLSTEST", 00:15:04.856 "trtype": "tcp", 00:15:04.856 "traddr": "10.0.0.3", 00:15:04.856 "adrfam": "ipv4", 00:15:04.856 "trsvcid": "4420", 00:15:04.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.856 "prchk_reftag": false, 00:15:04.856 "prchk_guard": false, 00:15:04.856 "hdgst": false, 00:15:04.856 "ddgst": false, 00:15:04.856 "psk": "key0", 00:15:04.856 "allow_unrecognized_csi": false, 00:15:04.856 "method": "bdev_nvme_attach_controller", 00:15:04.856 "req_id": 1 00:15:04.856 } 00:15:04.856 Got JSON-RPC error response 00:15:04.856 response: 00:15:04.856 { 00:15:04.856 "code": -126, 00:15:04.856 "message": "Required key not available" 00:15:04.856 } 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72093 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72093 ']' 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72093 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72093 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:04.856 killing process with pid 72093 00:15:04.856 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72093' 00:15:04.856 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.856 00:15:04.856 Latency(us) 00:15:04.856 [2024-11-22T14:53:19.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.856 [2024-11-22T14:53:19.521Z] =================================================================================================================== 00:15:04.856 [2024-11-22T14:53:19.522Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:04.857 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72093 00:15:04.857 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72093 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71917 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71917 ']' 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71917 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71917 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:05.117 killing process with pid 71917 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71917' 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71917 00:15:05.117 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71917 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72125 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72125 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72125 ']' 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.375 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.375 [2024-11-22 14:53:19.943601] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:05.375 [2024-11-22 14:53:19.943681] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.634 [2024-11-22 14:53:20.075847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.634 [2024-11-22 14:53:20.133188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.634 [2024-11-22 14:53:20.133263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.634 [2024-11-22 14:53:20.133275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.634 [2024-11-22 14:53:20.133283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.634 [2024-11-22 14:53:20.133290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
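
A fresh target instance (pid 72125) is being brought up here for the next negative case. The start-and-wait pattern, regrouped from the nvmfappstart trace above; the polling loop is only a sketch of what waitforlisten does, assuming it retries an RPC until the socket answers, since the helper's source is not part of this log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait for the RPC socket before issuing configuration RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
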
00:15:05.634 [2024-11-22 14:53:20.133749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.634 [2024-11-22 14:53:20.204282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.21R9er9XYg 00:15:06.571 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:06.571 [2024-11-22 14:53:21.188017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.571 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:06.830 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:07.090 [2024-11-22 14:53:21.700161] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.090 [2024-11-22 14:53:21.700486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.090 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:07.349 malloc0 00:15:07.608 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:07.608 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:07.867 
[2024-11-22 14:53:22.437737] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.21R9er9XYg': 0100666 00:15:07.867 [2024-11-22 14:53:22.437793] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:07.868 request: 00:15:07.868 { 00:15:07.868 "name": "key0", 00:15:07.868 "path": "/tmp/tmp.21R9er9XYg", 00:15:07.868 "method": "keyring_file_add_key", 00:15:07.868 "req_id": 1 00:15:07.868 } 00:15:07.868 Got JSON-RPC error response 00:15:07.868 response: 00:15:07.868 { 00:15:07.868 "code": -1, 00:15:07.868 "message": "Operation not permitted" 00:15:07.868 } 00:15:07.868 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:08.127 [2024-11-22 14:53:22.713812] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:08.127 [2024-11-22 14:53:22.714201] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:08.127 request: 00:15:08.127 { 00:15:08.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.127 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.127 "psk": "key0", 00:15:08.127 "method": "nvmf_subsystem_add_host", 00:15:08.127 "req_id": 1 00:15:08.127 } 00:15:08.127 Got JSON-RPC error response 00:15:08.127 response: 00:15:08.127 { 00:15:08.127 "code": -32603, 00:15:08.127 "message": "Internal error" 00:15:08.127 } 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72125 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72125 ']' 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72125 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72125 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:08.127 killing process with pid 72125 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72125' 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72125 00:15:08.127 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72125 00:15:08.386 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.21R9er9XYg 00:15:08.386 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:08.386 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.386 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72194 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72194 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72194 ']' 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.387 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.646 [2024-11-22 14:53:23.087271] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:08.646 [2024-11-22 14:53:23.087973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.646 [2024-11-22 14:53:23.225731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.646 [2024-11-22 14:53:23.295714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.646 [2024-11-22 14:53:23.296024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.646 [2024-11-22 14:53:23.296126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.646 [2024-11-22 14:53:23.296231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.646 [2024-11-22 14:53:23.296314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
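The keyring_file_add_key failure earlier (mode 0100666, "Operation not permitted") is the expected negative case: SPDK's file-based keyring refuses PSK files that are readable by group or other, which target/tls.sh@178 deliberately provokes before tightening the mode at @182 and starting a second target. A condensed sketch of the fix, reusing the exact RPC calls from this trace:

    # The keyring rejects a world-readable PSK file, so restrict it first.
    chmod 0600 /tmp/tmp.21R9er9XYg
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg
    # Only then can the host be bound to the subsystem with that PSK.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0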
00:15:08.646 [2024-11-22 14:53:23.296877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.905 [2024-11-22 14:53:23.354576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.21R9er9XYg 00:15:08.905 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:09.164 [2024-11-22 14:53:23.669549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.164 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:09.423 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:09.681 [2024-11-22 14:53:24.261618] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:09.682 [2024-11-22 14:53:24.261840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.682 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:09.940 malloc0 00:15:09.940 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:10.199 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:10.458 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72242 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72242 /var/tmp/bdevperf.sock 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72242 ']' 
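With the target listening on 10.0.0.3:4420 in secure-channel mode and bdevperf started with -z on /var/tmp/bdevperf.sock, the trace that follows attaches the initiator over TLS: the same key file is registered in the bdevperf process's own keyring and then referenced by name in the attach call. A condensed sketch of those two RPCs as they appear further down (the $RPC variable is just shorthand for readability):

    # Host side: register the PSK with bdevperf's keyring, then attach over TLS.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.21R9er9XYg
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0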
00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.716 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.716 [2024-11-22 14:53:25.340521] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:10.716 [2024-11-22 14:53:25.340614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72242 ] 00:15:10.974 [2024-11-22 14:53:25.488489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.974 [2024-11-22 14:53:25.556106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.974 [2024-11-22 14:53:25.629711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.910 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.910 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:11.910 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:11.910 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:12.170 [2024-11-22 14:53:26.677437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.170 TLSTESTn1 00:15:12.170 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:12.737 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:12.737 "subsystems": [ 00:15:12.737 { 00:15:12.737 "subsystem": "keyring", 00:15:12.737 "config": [ 00:15:12.737 { 00:15:12.737 "method": "keyring_file_add_key", 00:15:12.737 "params": { 00:15:12.737 "name": "key0", 00:15:12.737 "path": "/tmp/tmp.21R9er9XYg" 00:15:12.737 } 00:15:12.737 } 00:15:12.737 ] 00:15:12.737 }, 00:15:12.737 { 00:15:12.737 "subsystem": "iobuf", 00:15:12.737 "config": [ 00:15:12.737 { 00:15:12.737 "method": "iobuf_set_options", 00:15:12.737 "params": { 00:15:12.737 "small_pool_count": 8192, 00:15:12.737 "large_pool_count": 1024, 00:15:12.737 "small_bufsize": 8192, 00:15:12.737 "large_bufsize": 135168, 00:15:12.737 "enable_numa": false 00:15:12.737 } 00:15:12.737 } 00:15:12.737 ] 00:15:12.737 }, 00:15:12.737 { 00:15:12.737 "subsystem": "sock", 00:15:12.737 "config": [ 00:15:12.737 { 00:15:12.737 "method": "sock_set_default_impl", 00:15:12.737 "params": { 
00:15:12.737 "impl_name": "uring" 00:15:12.737 } 00:15:12.737 }, 00:15:12.737 { 00:15:12.737 "method": "sock_impl_set_options", 00:15:12.737 "params": { 00:15:12.737 "impl_name": "ssl", 00:15:12.737 "recv_buf_size": 4096, 00:15:12.737 "send_buf_size": 4096, 00:15:12.737 "enable_recv_pipe": true, 00:15:12.737 "enable_quickack": false, 00:15:12.737 "enable_placement_id": 0, 00:15:12.737 "enable_zerocopy_send_server": true, 00:15:12.737 "enable_zerocopy_send_client": false, 00:15:12.737 "zerocopy_threshold": 0, 00:15:12.737 "tls_version": 0, 00:15:12.737 "enable_ktls": false 00:15:12.737 } 00:15:12.737 }, 00:15:12.737 { 00:15:12.737 "method": "sock_impl_set_options", 00:15:12.737 "params": { 00:15:12.737 "impl_name": "posix", 00:15:12.737 "recv_buf_size": 2097152, 00:15:12.737 "send_buf_size": 2097152, 00:15:12.737 "enable_recv_pipe": true, 00:15:12.738 "enable_quickack": false, 00:15:12.738 "enable_placement_id": 0, 00:15:12.738 "enable_zerocopy_send_server": true, 00:15:12.738 "enable_zerocopy_send_client": false, 00:15:12.738 "zerocopy_threshold": 0, 00:15:12.738 "tls_version": 0, 00:15:12.738 "enable_ktls": false 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "sock_impl_set_options", 00:15:12.738 "params": { 00:15:12.738 "impl_name": "uring", 00:15:12.738 "recv_buf_size": 2097152, 00:15:12.738 "send_buf_size": 2097152, 00:15:12.738 "enable_recv_pipe": true, 00:15:12.738 "enable_quickack": false, 00:15:12.738 "enable_placement_id": 0, 00:15:12.738 "enable_zerocopy_send_server": false, 00:15:12.738 "enable_zerocopy_send_client": false, 00:15:12.738 "zerocopy_threshold": 0, 00:15:12.738 "tls_version": 0, 00:15:12.738 "enable_ktls": false 00:15:12.738 } 00:15:12.738 } 00:15:12.738 ] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "vmd", 00:15:12.738 "config": [] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "accel", 00:15:12.738 "config": [ 00:15:12.738 { 00:15:12.738 "method": "accel_set_options", 00:15:12.738 "params": { 00:15:12.738 "small_cache_size": 128, 00:15:12.738 "large_cache_size": 16, 00:15:12.738 "task_count": 2048, 00:15:12.738 "sequence_count": 2048, 00:15:12.738 "buf_count": 2048 00:15:12.738 } 00:15:12.738 } 00:15:12.738 ] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "bdev", 00:15:12.738 "config": [ 00:15:12.738 { 00:15:12.738 "method": "bdev_set_options", 00:15:12.738 "params": { 00:15:12.738 "bdev_io_pool_size": 65535, 00:15:12.738 "bdev_io_cache_size": 256, 00:15:12.738 "bdev_auto_examine": true, 00:15:12.738 "iobuf_small_cache_size": 128, 00:15:12.738 "iobuf_large_cache_size": 16 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_raid_set_options", 00:15:12.738 "params": { 00:15:12.738 "process_window_size_kb": 1024, 00:15:12.738 "process_max_bandwidth_mb_sec": 0 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_iscsi_set_options", 00:15:12.738 "params": { 00:15:12.738 "timeout_sec": 30 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_nvme_set_options", 00:15:12.738 "params": { 00:15:12.738 "action_on_timeout": "none", 00:15:12.738 "timeout_us": 0, 00:15:12.738 "timeout_admin_us": 0, 00:15:12.738 "keep_alive_timeout_ms": 10000, 00:15:12.738 "arbitration_burst": 0, 00:15:12.738 "low_priority_weight": 0, 00:15:12.738 "medium_priority_weight": 0, 00:15:12.738 "high_priority_weight": 0, 00:15:12.738 "nvme_adminq_poll_period_us": 10000, 00:15:12.738 "nvme_ioq_poll_period_us": 0, 00:15:12.738 "io_queue_requests": 0, 00:15:12.738 "delay_cmd_submit": 
true, 00:15:12.738 "transport_retry_count": 4, 00:15:12.738 "bdev_retry_count": 3, 00:15:12.738 "transport_ack_timeout": 0, 00:15:12.738 "ctrlr_loss_timeout_sec": 0, 00:15:12.738 "reconnect_delay_sec": 0, 00:15:12.738 "fast_io_fail_timeout_sec": 0, 00:15:12.738 "disable_auto_failback": false, 00:15:12.738 "generate_uuids": false, 00:15:12.738 "transport_tos": 0, 00:15:12.738 "nvme_error_stat": false, 00:15:12.738 "rdma_srq_size": 0, 00:15:12.738 "io_path_stat": false, 00:15:12.738 "allow_accel_sequence": false, 00:15:12.738 "rdma_max_cq_size": 0, 00:15:12.738 "rdma_cm_event_timeout_ms": 0, 00:15:12.738 "dhchap_digests": [ 00:15:12.738 "sha256", 00:15:12.738 "sha384", 00:15:12.738 "sha512" 00:15:12.738 ], 00:15:12.738 "dhchap_dhgroups": [ 00:15:12.738 "null", 00:15:12.738 "ffdhe2048", 00:15:12.738 "ffdhe3072", 00:15:12.738 "ffdhe4096", 00:15:12.738 "ffdhe6144", 00:15:12.738 "ffdhe8192" 00:15:12.738 ] 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_nvme_set_hotplug", 00:15:12.738 "params": { 00:15:12.738 "period_us": 100000, 00:15:12.738 "enable": false 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_malloc_create", 00:15:12.738 "params": { 00:15:12.738 "name": "malloc0", 00:15:12.738 "num_blocks": 8192, 00:15:12.738 "block_size": 4096, 00:15:12.738 "physical_block_size": 4096, 00:15:12.738 "uuid": "4bad2938-05bb-454e-a5ea-de25834c8751", 00:15:12.738 "optimal_io_boundary": 0, 00:15:12.738 "md_size": 0, 00:15:12.738 "dif_type": 0, 00:15:12.738 "dif_is_head_of_md": false, 00:15:12.738 "dif_pi_format": 0 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "bdev_wait_for_examine" 00:15:12.738 } 00:15:12.738 ] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "nbd", 00:15:12.738 "config": [] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "scheduler", 00:15:12.738 "config": [ 00:15:12.738 { 00:15:12.738 "method": "framework_set_scheduler", 00:15:12.738 "params": { 00:15:12.738 "name": "static" 00:15:12.738 } 00:15:12.738 } 00:15:12.738 ] 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "subsystem": "nvmf", 00:15:12.738 "config": [ 00:15:12.738 { 00:15:12.738 "method": "nvmf_set_config", 00:15:12.738 "params": { 00:15:12.738 "discovery_filter": "match_any", 00:15:12.738 "admin_cmd_passthru": { 00:15:12.738 "identify_ctrlr": false 00:15:12.738 }, 00:15:12.738 "dhchap_digests": [ 00:15:12.738 "sha256", 00:15:12.738 "sha384", 00:15:12.738 "sha512" 00:15:12.738 ], 00:15:12.738 "dhchap_dhgroups": [ 00:15:12.738 "null", 00:15:12.738 "ffdhe2048", 00:15:12.738 "ffdhe3072", 00:15:12.738 "ffdhe4096", 00:15:12.738 "ffdhe6144", 00:15:12.738 "ffdhe8192" 00:15:12.738 ] 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_set_max_subsystems", 00:15:12.738 "params": { 00:15:12.738 "max_subsystems": 1024 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_set_crdt", 00:15:12.738 "params": { 00:15:12.738 "crdt1": 0, 00:15:12.738 "crdt2": 0, 00:15:12.738 "crdt3": 0 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_create_transport", 00:15:12.738 "params": { 00:15:12.738 "trtype": "TCP", 00:15:12.738 "max_queue_depth": 128, 00:15:12.738 "max_io_qpairs_per_ctrlr": 127, 00:15:12.738 "in_capsule_data_size": 4096, 00:15:12.738 "max_io_size": 131072, 00:15:12.738 "io_unit_size": 131072, 00:15:12.738 "max_aq_depth": 128, 00:15:12.738 "num_shared_buffers": 511, 00:15:12.738 "buf_cache_size": 4294967295, 00:15:12.738 "dif_insert_or_strip": false, 00:15:12.738 "zcopy": false, 
00:15:12.738 "c2h_success": false, 00:15:12.738 "sock_priority": 0, 00:15:12.738 "abort_timeout_sec": 1, 00:15:12.738 "ack_timeout": 0, 00:15:12.738 "data_wr_pool_size": 0 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_create_subsystem", 00:15:12.738 "params": { 00:15:12.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.738 "allow_any_host": false, 00:15:12.738 "serial_number": "SPDK00000000000001", 00:15:12.738 "model_number": "SPDK bdev Controller", 00:15:12.738 "max_namespaces": 10, 00:15:12.738 "min_cntlid": 1, 00:15:12.738 "max_cntlid": 65519, 00:15:12.738 "ana_reporting": false 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_subsystem_add_host", 00:15:12.738 "params": { 00:15:12.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.738 "host": "nqn.2016-06.io.spdk:host1", 00:15:12.738 "psk": "key0" 00:15:12.738 } 00:15:12.738 }, 00:15:12.738 { 00:15:12.738 "method": "nvmf_subsystem_add_ns", 00:15:12.738 "params": { 00:15:12.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.738 "namespace": { 00:15:12.738 "nsid": 1, 00:15:12.738 "bdev_name": "malloc0", 00:15:12.739 "nguid": "4BAD293805BB454EA5EADE25834C8751", 00:15:12.739 "uuid": "4bad2938-05bb-454e-a5ea-de25834c8751", 00:15:12.739 "no_auto_visible": false 00:15:12.739 } 00:15:12.739 } 00:15:12.739 }, 00:15:12.739 { 00:15:12.739 "method": "nvmf_subsystem_add_listener", 00:15:12.739 "params": { 00:15:12.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.739 "listen_address": { 00:15:12.739 "trtype": "TCP", 00:15:12.739 "adrfam": "IPv4", 00:15:12.739 "traddr": "10.0.0.3", 00:15:12.739 "trsvcid": "4420" 00:15:12.739 }, 00:15:12.739 "secure_channel": true 00:15:12.739 } 00:15:12.739 } 00:15:12.739 ] 00:15:12.739 } 00:15:12.739 ] 00:15:12.739 }' 00:15:12.739 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:12.998 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:12.998 "subsystems": [ 00:15:12.998 { 00:15:12.998 "subsystem": "keyring", 00:15:12.998 "config": [ 00:15:12.998 { 00:15:12.998 "method": "keyring_file_add_key", 00:15:12.998 "params": { 00:15:12.998 "name": "key0", 00:15:12.998 "path": "/tmp/tmp.21R9er9XYg" 00:15:12.998 } 00:15:12.998 } 00:15:12.998 ] 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "subsystem": "iobuf", 00:15:12.998 "config": [ 00:15:12.998 { 00:15:12.998 "method": "iobuf_set_options", 00:15:12.998 "params": { 00:15:12.998 "small_pool_count": 8192, 00:15:12.998 "large_pool_count": 1024, 00:15:12.998 "small_bufsize": 8192, 00:15:12.998 "large_bufsize": 135168, 00:15:12.998 "enable_numa": false 00:15:12.998 } 00:15:12.998 } 00:15:12.998 ] 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "subsystem": "sock", 00:15:12.998 "config": [ 00:15:12.998 { 00:15:12.998 "method": "sock_set_default_impl", 00:15:12.998 "params": { 00:15:12.998 "impl_name": "uring" 00:15:12.998 } 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "method": "sock_impl_set_options", 00:15:12.998 "params": { 00:15:12.998 "impl_name": "ssl", 00:15:12.998 "recv_buf_size": 4096, 00:15:12.998 "send_buf_size": 4096, 00:15:12.998 "enable_recv_pipe": true, 00:15:12.998 "enable_quickack": false, 00:15:12.998 "enable_placement_id": 0, 00:15:12.998 "enable_zerocopy_send_server": true, 00:15:12.998 "enable_zerocopy_send_client": false, 00:15:12.998 "zerocopy_threshold": 0, 00:15:12.998 "tls_version": 0, 00:15:12.998 "enable_ktls": false 00:15:12.998 } 00:15:12.998 }, 
00:15:12.998 { 00:15:12.998 "method": "sock_impl_set_options", 00:15:12.998 "params": { 00:15:12.998 "impl_name": "posix", 00:15:12.998 "recv_buf_size": 2097152, 00:15:12.998 "send_buf_size": 2097152, 00:15:12.998 "enable_recv_pipe": true, 00:15:12.998 "enable_quickack": false, 00:15:12.998 "enable_placement_id": 0, 00:15:12.998 "enable_zerocopy_send_server": true, 00:15:12.998 "enable_zerocopy_send_client": false, 00:15:12.998 "zerocopy_threshold": 0, 00:15:12.998 "tls_version": 0, 00:15:12.998 "enable_ktls": false 00:15:12.998 } 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "method": "sock_impl_set_options", 00:15:12.998 "params": { 00:15:12.998 "impl_name": "uring", 00:15:12.998 "recv_buf_size": 2097152, 00:15:12.998 "send_buf_size": 2097152, 00:15:12.998 "enable_recv_pipe": true, 00:15:12.998 "enable_quickack": false, 00:15:12.998 "enable_placement_id": 0, 00:15:12.998 "enable_zerocopy_send_server": false, 00:15:12.998 "enable_zerocopy_send_client": false, 00:15:12.998 "zerocopy_threshold": 0, 00:15:12.998 "tls_version": 0, 00:15:12.998 "enable_ktls": false 00:15:12.998 } 00:15:12.998 } 00:15:12.998 ] 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "subsystem": "vmd", 00:15:12.998 "config": [] 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "subsystem": "accel", 00:15:12.998 "config": [ 00:15:12.998 { 00:15:12.998 "method": "accel_set_options", 00:15:12.998 "params": { 00:15:12.998 "small_cache_size": 128, 00:15:12.998 "large_cache_size": 16, 00:15:12.998 "task_count": 2048, 00:15:12.998 "sequence_count": 2048, 00:15:12.998 "buf_count": 2048 00:15:12.998 } 00:15:12.998 } 00:15:12.998 ] 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "subsystem": "bdev", 00:15:12.998 "config": [ 00:15:12.998 { 00:15:12.998 "method": "bdev_set_options", 00:15:12.998 "params": { 00:15:12.998 "bdev_io_pool_size": 65535, 00:15:12.998 "bdev_io_cache_size": 256, 00:15:12.998 "bdev_auto_examine": true, 00:15:12.998 "iobuf_small_cache_size": 128, 00:15:12.998 "iobuf_large_cache_size": 16 00:15:12.998 } 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "method": "bdev_raid_set_options", 00:15:12.998 "params": { 00:15:12.998 "process_window_size_kb": 1024, 00:15:12.998 "process_max_bandwidth_mb_sec": 0 00:15:12.998 } 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "method": "bdev_iscsi_set_options", 00:15:12.998 "params": { 00:15:12.998 "timeout_sec": 30 00:15:12.998 } 00:15:12.998 }, 00:15:12.998 { 00:15:12.998 "method": "bdev_nvme_set_options", 00:15:12.998 "params": { 00:15:12.998 "action_on_timeout": "none", 00:15:12.998 "timeout_us": 0, 00:15:12.998 "timeout_admin_us": 0, 00:15:12.998 "keep_alive_timeout_ms": 10000, 00:15:12.998 "arbitration_burst": 0, 00:15:12.999 "low_priority_weight": 0, 00:15:12.999 "medium_priority_weight": 0, 00:15:12.999 "high_priority_weight": 0, 00:15:12.999 "nvme_adminq_poll_period_us": 10000, 00:15:12.999 "nvme_ioq_poll_period_us": 0, 00:15:12.999 "io_queue_requests": 512, 00:15:12.999 "delay_cmd_submit": true, 00:15:12.999 "transport_retry_count": 4, 00:15:12.999 "bdev_retry_count": 3, 00:15:12.999 "transport_ack_timeout": 0, 00:15:12.999 "ctrlr_loss_timeout_sec": 0, 00:15:12.999 "reconnect_delay_sec": 0, 00:15:12.999 "fast_io_fail_timeout_sec": 0, 00:15:12.999 "disable_auto_failback": false, 00:15:12.999 "generate_uuids": false, 00:15:12.999 "transport_tos": 0, 00:15:12.999 "nvme_error_stat": false, 00:15:12.999 "rdma_srq_size": 0, 00:15:12.999 "io_path_stat": false, 00:15:12.999 "allow_accel_sequence": false, 00:15:12.999 "rdma_max_cq_size": 0, 00:15:12.999 "rdma_cm_event_timeout_ms": 0, 00:15:12.999 
"dhchap_digests": [ 00:15:12.999 "sha256", 00:15:12.999 "sha384", 00:15:12.999 "sha512" 00:15:12.999 ], 00:15:12.999 "dhchap_dhgroups": [ 00:15:12.999 "null", 00:15:12.999 "ffdhe2048", 00:15:12.999 "ffdhe3072", 00:15:12.999 "ffdhe4096", 00:15:12.999 "ffdhe6144", 00:15:12.999 "ffdhe8192" 00:15:12.999 ] 00:15:12.999 } 00:15:12.999 }, 00:15:12.999 { 00:15:12.999 "method": "bdev_nvme_attach_controller", 00:15:12.999 "params": { 00:15:12.999 "name": "TLSTEST", 00:15:12.999 "trtype": "TCP", 00:15:12.999 "adrfam": "IPv4", 00:15:12.999 "traddr": "10.0.0.3", 00:15:12.999 "trsvcid": "4420", 00:15:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.999 "prchk_reftag": false, 00:15:12.999 "prchk_guard": false, 00:15:12.999 "ctrlr_loss_timeout_sec": 0, 00:15:12.999 "reconnect_delay_sec": 0, 00:15:12.999 "fast_io_fail_timeout_sec": 0, 00:15:12.999 "psk": "key0", 00:15:12.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.999 "hdgst": false, 00:15:12.999 "ddgst": false, 00:15:12.999 "multipath": "multipath" 00:15:12.999 } 00:15:12.999 }, 00:15:12.999 { 00:15:12.999 "method": "bdev_nvme_set_hotplug", 00:15:12.999 "params": { 00:15:12.999 "period_us": 100000, 00:15:12.999 "enable": false 00:15:12.999 } 00:15:12.999 }, 00:15:12.999 { 00:15:12.999 "method": "bdev_wait_for_examine" 00:15:12.999 } 00:15:12.999 ] 00:15:12.999 }, 00:15:12.999 { 00:15:12.999 "subsystem": "nbd", 00:15:12.999 "config": [] 00:15:12.999 } 00:15:12.999 ] 00:15:12.999 }' 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72242 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72242 ']' 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72242 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72242 00:15:12.999 killing process with pid 72242 00:15:12.999 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.999 00:15:12.999 Latency(us) 00:15:12.999 [2024-11-22T14:53:27.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.999 [2024-11-22T14:53:27.664Z] =================================================================================================================== 00:15:12.999 [2024-11-22T14:53:27.664Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72242' 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72242 00:15:12.999 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72242 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72194 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72194 ']' 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72194 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72194 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:13.258 killing process with pid 72194 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72194' 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72194 00:15:13.258 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72194 00:15:13.518 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:13.518 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.518 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.518 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.518 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:13.518 "subsystems": [ 00:15:13.518 { 00:15:13.518 "subsystem": "keyring", 00:15:13.518 "config": [ 00:15:13.518 { 00:15:13.518 "method": "keyring_file_add_key", 00:15:13.518 "params": { 00:15:13.518 "name": "key0", 00:15:13.518 "path": "/tmp/tmp.21R9er9XYg" 00:15:13.518 } 00:15:13.518 } 00:15:13.518 ] 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "subsystem": "iobuf", 00:15:13.518 "config": [ 00:15:13.518 { 00:15:13.518 "method": "iobuf_set_options", 00:15:13.518 "params": { 00:15:13.518 "small_pool_count": 8192, 00:15:13.518 "large_pool_count": 1024, 00:15:13.518 "small_bufsize": 8192, 00:15:13.518 "large_bufsize": 135168, 00:15:13.518 "enable_numa": false 00:15:13.518 } 00:15:13.518 } 00:15:13.518 ] 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "subsystem": "sock", 00:15:13.518 "config": [ 00:15:13.518 { 00:15:13.518 "method": "sock_set_default_impl", 00:15:13.518 "params": { 00:15:13.518 "impl_name": "uring" 00:15:13.518 } 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "method": "sock_impl_set_options", 00:15:13.518 "params": { 00:15:13.518 "impl_name": "ssl", 00:15:13.518 "recv_buf_size": 4096, 00:15:13.518 "send_buf_size": 4096, 00:15:13.518 "enable_recv_pipe": true, 00:15:13.518 "enable_quickack": false, 00:15:13.518 "enable_placement_id": 0, 00:15:13.518 "enable_zerocopy_send_server": true, 00:15:13.518 "enable_zerocopy_send_client": false, 00:15:13.518 "zerocopy_threshold": 0, 00:15:13.518 "tls_version": 0, 00:15:13.518 "enable_ktls": false 00:15:13.518 } 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "method": "sock_impl_set_options", 00:15:13.518 "params": { 00:15:13.518 "impl_name": "posix", 00:15:13.518 "recv_buf_size": 2097152, 00:15:13.518 "send_buf_size": 2097152, 00:15:13.518 "enable_recv_pipe": true, 00:15:13.518 "enable_quickack": false, 00:15:13.518 "enable_placement_id": 0, 00:15:13.518 "enable_zerocopy_send_server": true, 00:15:13.518 "enable_zerocopy_send_client": false, 00:15:13.518 "zerocopy_threshold": 0, 00:15:13.518 "tls_version": 0, 00:15:13.518 "enable_ktls": false 
00:15:13.518 } 00:15:13.518 }, 00:15:13.518 { 00:15:13.518 "method": "sock_impl_set_options", 00:15:13.518 "params": { 00:15:13.518 "impl_name": "uring", 00:15:13.518 "recv_buf_size": 2097152, 00:15:13.518 "send_buf_size": 2097152, 00:15:13.518 "enable_recv_pipe": true, 00:15:13.518 "enable_quickack": false, 00:15:13.518 "enable_placement_id": 0, 00:15:13.518 "enable_zerocopy_send_server": false, 00:15:13.518 "enable_zerocopy_send_client": false, 00:15:13.518 "zerocopy_threshold": 0, 00:15:13.518 "tls_version": 0, 00:15:13.518 "enable_ktls": false 00:15:13.518 } 00:15:13.518 } 00:15:13.518 ] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "vmd", 00:15:13.519 "config": [] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "accel", 00:15:13.519 "config": [ 00:15:13.519 { 00:15:13.519 "method": "accel_set_options", 00:15:13.519 "params": { 00:15:13.519 "small_cache_size": 128, 00:15:13.519 "large_cache_size": 16, 00:15:13.519 "task_count": 2048, 00:15:13.519 "sequence_count": 2048, 00:15:13.519 "buf_count": 2048 00:15:13.519 } 00:15:13.519 } 00:15:13.519 ] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "bdev", 00:15:13.519 "config": [ 00:15:13.519 { 00:15:13.519 "method": "bdev_set_options", 00:15:13.519 "params": { 00:15:13.519 "bdev_io_pool_size": 65535, 00:15:13.519 "bdev_io_cache_size": 256, 00:15:13.519 "bdev_auto_examine": true, 00:15:13.519 "iobuf_small_cache_size": 128, 00:15:13.519 "iobuf_large_cache_size": 16 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "bdev_raid_set_options", 00:15:13.519 "params": { 00:15:13.519 "process_window_size_kb": 1024, 00:15:13.519 "process_max_bandwidth_mb_sec": 0 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "bdev_iscsi_set_options", 00:15:13.519 "params": { 00:15:13.519 "timeout_sec": 30 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "bdev_nvme_set_options", 00:15:13.519 "params": { 00:15:13.519 "action_on_timeout": "none", 00:15:13.519 "timeout_us": 0, 00:15:13.519 "timeout_admin_us": 0, 00:15:13.519 "keep_alive_timeout_ms": 10000, 00:15:13.519 "arbitration_burst": 0, 00:15:13.519 "low_priority_weight": 0, 00:15:13.519 "medium_priority_weight": 0, 00:15:13.519 "high_priority_weight": 0, 00:15:13.519 "nvme_adminq_poll_period_us": 10000, 00:15:13.519 "nvme_ioq_poll_period_us": 0, 00:15:13.519 "io_queue_requests": 0, 00:15:13.519 "delay_cmd_submit": true, 00:15:13.519 "transport_retry_count": 4, 00:15:13.519 "bdev_retry_count": 3, 00:15:13.519 "transport_ack_timeout": 0, 00:15:13.519 "ctrlr_loss_timeout_sec": 0, 00:15:13.519 "reconnect_delay_sec": 0, 00:15:13.519 "fast_io_fail_timeout_sec": 0, 00:15:13.519 "disable_auto_failback": false, 00:15:13.519 "generate_uuids": false, 00:15:13.519 "transport_tos": 0, 00:15:13.519 "nvme_error_stat": false, 00:15:13.519 "rdma_srq_size": 0, 00:15:13.519 "io_path_stat": false, 00:15:13.519 "allow_accel_sequence": false, 00:15:13.519 "rdma_max_cq_size": 0, 00:15:13.519 "rdma_cm_event_timeout_ms": 0, 00:15:13.519 "dhchap_digests": [ 00:15:13.519 "sha256", 00:15:13.519 "sha384", 00:15:13.519 "sha512" 00:15:13.519 ], 00:15:13.519 "dhchap_dhgroups": [ 00:15:13.519 "null", 00:15:13.519 "ffdhe2048", 00:15:13.519 "ffdhe3072", 00:15:13.519 "ffdhe4096", 00:15:13.519 "ffdhe6144", 00:15:13.519 "ffdhe8192" 00:15:13.519 ] 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "bdev_nvme_set_hotplug", 00:15:13.519 "params": { 00:15:13.519 "period_us": 100000, 00:15:13.519 "enable": false 00:15:13.519 } 00:15:13.519 }, 
00:15:13.519 { 00:15:13.519 "method": "bdev_malloc_create", 00:15:13.519 "params": { 00:15:13.519 "name": "malloc0", 00:15:13.519 "num_blocks": 8192, 00:15:13.519 "block_size": 4096, 00:15:13.519 "physical_block_size": 4096, 00:15:13.519 "uuid": "4bad2938-05bb-454e-a5ea-de25834c8751", 00:15:13.519 "optimal_io_boundary": 0, 00:15:13.519 "md_size": 0, 00:15:13.519 "dif_type": 0, 00:15:13.519 "dif_is_head_of_md": false, 00:15:13.519 "dif_pi_format": 0 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "bdev_wait_for_examine" 00:15:13.519 } 00:15:13.519 ] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "nbd", 00:15:13.519 "config": [] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "scheduler", 00:15:13.519 "config": [ 00:15:13.519 { 00:15:13.519 "method": "framework_set_scheduler", 00:15:13.519 "params": { 00:15:13.519 "name": "static" 00:15:13.519 } 00:15:13.519 } 00:15:13.519 ] 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "subsystem": "nvmf", 00:15:13.519 "config": [ 00:15:13.519 { 00:15:13.519 "method": "nvmf_set_config", 00:15:13.519 "params": { 00:15:13.519 "discovery_filter": "match_any", 00:15:13.519 "admin_cmd_passthru": { 00:15:13.519 "identify_ctrlr": false 00:15:13.519 }, 00:15:13.519 "dhchap_digests": [ 00:15:13.519 "sha256", 00:15:13.519 "sha384", 00:15:13.519 "sha512" 00:15:13.519 ], 00:15:13.519 "dhchap_dhgroups": [ 00:15:13.519 "null", 00:15:13.519 "ffdhe2048", 00:15:13.519 "ffdhe3072", 00:15:13.519 "ffdhe4096", 00:15:13.519 "ffdhe6144", 00:15:13.519 "ffdhe8192" 00:15:13.519 ] 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_set_max_subsystems", 00:15:13.519 "params": { 00:15:13.519 "max_subsystems": 1024 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_set_crdt", 00:15:13.519 "params": { 00:15:13.519 "crdt1": 0, 00:15:13.519 "crdt2": 0, 00:15:13.519 "crdt3": 0 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_create_transport", 00:15:13.519 "params": { 00:15:13.519 "trtype": "TCP", 00:15:13.519 "max_queue_depth": 128, 00:15:13.519 "max_io_qpairs_per_ctrlr": 127, 00:15:13.519 "in_capsule_data_size": 4096, 00:15:13.519 "max_io_size": 131072, 00:15:13.519 "io_unit_size": 131072, 00:15:13.519 "max_aq_depth": 128, 00:15:13.519 "num_shared_buffers": 511, 00:15:13.519 "buf_cache_size": 4294967295, 00:15:13.519 "dif_insert_or_strip": false, 00:15:13.519 "zcopy": false, 00:15:13.519 "c2h_success": false, 00:15:13.519 "sock_priority": 0, 00:15:13.519 "abort_timeout_sec": 1, 00:15:13.519 "ack_timeout": 0, 00:15:13.519 "data_wr_pool_size": 0 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_create_subsystem", 00:15:13.519 "params": { 00:15:13.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.519 "allow_any_host": false, 00:15:13.519 "serial_number": "SPDK00000000000001", 00:15:13.519 "model_number": "SPDK bdev Controller", 00:15:13.519 "max_namespaces": 10, 00:15:13.519 "min_cntlid": 1, 00:15:13.519 "max_cntlid": 65519, 00:15:13.519 "ana_reporting": false 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_subsystem_add_host", 00:15:13.519 "params": { 00:15:13.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.519 "host": "nqn.2016-06.io.spdk:host1", 00:15:13.519 "psk": "key0" 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_subsystem_add_ns", 00:15:13.519 "params": { 00:15:13.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.519 "namespace": { 00:15:13.519 "nsid": 1, 00:15:13.519 "bdev_name": "malloc0", 
00:15:13.519 "nguid": "4BAD293805BB454EA5EADE25834C8751", 00:15:13.519 "uuid": "4bad2938-05bb-454e-a5ea-de25834c8751", 00:15:13.519 "no_auto_visible": false 00:15:13.519 } 00:15:13.519 } 00:15:13.519 }, 00:15:13.519 { 00:15:13.519 "method": "nvmf_subsystem_add_listener", 00:15:13.519 "params": { 00:15:13.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.519 "listen_address": { 00:15:13.519 "trtype": "TCP", 00:15:13.519 "adrfam": "IPv4", 00:15:13.519 "traddr": "10.0.0.3", 00:15:13.519 "trsvcid": "4420" 00:15:13.519 }, 00:15:13.519 "secure_channel": true 00:15:13.519 } 00:15:13.519 } 00:15:13.519 ] 00:15:13.519 } 00:15:13.519 ] 00:15:13.519 }' 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72292 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72292 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72292 ']' 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.520 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.520 [2024-11-22 14:53:28.071480] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:13.520 [2024-11-22 14:53:28.071578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.780 [2024-11-22 14:53:28.205903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.780 [2024-11-22 14:53:28.253686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.780 [2024-11-22 14:53:28.253760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.780 [2024-11-22 14:53:28.253769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.780 [2024-11-22 14:53:28.253776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.780 [2024-11-22 14:53:28.253783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:13.780 [2024-11-22 14:53:28.254212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.039 [2024-11-22 14:53:28.443925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.039 [2024-11-22 14:53:28.534977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.039 [2024-11-22 14:53:28.566927] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.039 [2024-11-22 14:53:28.567144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72324 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72324 /var/tmp/bdevperf.sock 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72324 ']' 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.608 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:14.608 "subsystems": [ 00:15:14.608 { 00:15:14.608 "subsystem": "keyring", 00:15:14.608 "config": [ 00:15:14.608 { 00:15:14.608 "method": "keyring_file_add_key", 00:15:14.608 "params": { 00:15:14.608 "name": "key0", 00:15:14.608 "path": "/tmp/tmp.21R9er9XYg" 00:15:14.608 } 00:15:14.608 } 00:15:14.608 ] 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "subsystem": "iobuf", 00:15:14.608 "config": [ 00:15:14.608 { 00:15:14.608 "method": "iobuf_set_options", 00:15:14.608 "params": { 00:15:14.608 "small_pool_count": 8192, 00:15:14.608 "large_pool_count": 1024, 00:15:14.608 "small_bufsize": 8192, 00:15:14.608 "large_bufsize": 135168, 00:15:14.608 "enable_numa": false 00:15:14.608 } 00:15:14.608 } 00:15:14.608 ] 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "subsystem": "sock", 00:15:14.608 "config": [ 00:15:14.608 { 00:15:14.608 "method": "sock_set_default_impl", 00:15:14.608 "params": { 00:15:14.608 "impl_name": "uring" 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "sock_impl_set_options", 00:15:14.608 "params": { 00:15:14.608 "impl_name": "ssl", 00:15:14.608 "recv_buf_size": 4096, 00:15:14.608 "send_buf_size": 4096, 00:15:14.608 "enable_recv_pipe": true, 00:15:14.608 "enable_quickack": false, 00:15:14.608 "enable_placement_id": 0, 00:15:14.608 "enable_zerocopy_send_server": true, 00:15:14.608 "enable_zerocopy_send_client": false, 00:15:14.608 "zerocopy_threshold": 0, 00:15:14.608 "tls_version": 0, 00:15:14.608 "enable_ktls": false 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "sock_impl_set_options", 00:15:14.608 "params": { 00:15:14.608 "impl_name": "posix", 00:15:14.608 "recv_buf_size": 2097152, 00:15:14.608 "send_buf_size": 2097152, 00:15:14.608 "enable_recv_pipe": true, 00:15:14.608 "enable_quickack": false, 00:15:14.608 "enable_placement_id": 0, 00:15:14.608 "enable_zerocopy_send_server": true, 00:15:14.608 "enable_zerocopy_send_client": false, 00:15:14.608 "zerocopy_threshold": 0, 00:15:14.608 "tls_version": 0, 00:15:14.608 "enable_ktls": false 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "sock_impl_set_options", 00:15:14.608 "params": { 00:15:14.608 "impl_name": "uring", 00:15:14.608 "recv_buf_size": 2097152, 00:15:14.608 "send_buf_size": 2097152, 00:15:14.608 "enable_recv_pipe": true, 00:15:14.608 "enable_quickack": false, 00:15:14.608 "enable_placement_id": 0, 00:15:14.608 "enable_zerocopy_send_server": false, 00:15:14.608 "enable_zerocopy_send_client": false, 00:15:14.608 "zerocopy_threshold": 0, 00:15:14.608 "tls_version": 0, 00:15:14.608 "enable_ktls": false 00:15:14.608 } 00:15:14.608 } 00:15:14.608 ] 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "subsystem": "vmd", 00:15:14.608 "config": [] 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "subsystem": "accel", 00:15:14.608 "config": [ 00:15:14.608 { 00:15:14.608 "method": "accel_set_options", 00:15:14.608 "params": { 00:15:14.608 "small_cache_size": 128, 00:15:14.608 "large_cache_size": 16, 00:15:14.608 "task_count": 2048, 00:15:14.608 "sequence_count": 2048, 00:15:14.608 "buf_count": 2048 00:15:14.608 } 00:15:14.608 } 00:15:14.608 ] 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "subsystem": "bdev", 00:15:14.608 "config": [ 00:15:14.608 { 00:15:14.608 "method": "bdev_set_options", 00:15:14.608 "params": { 00:15:14.608 "bdev_io_pool_size": 65535, 00:15:14.608 
"bdev_io_cache_size": 256, 00:15:14.608 "bdev_auto_examine": true, 00:15:14.608 "iobuf_small_cache_size": 128, 00:15:14.608 "iobuf_large_cache_size": 16 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "bdev_raid_set_options", 00:15:14.608 "params": { 00:15:14.608 "process_window_size_kb": 1024, 00:15:14.608 "process_max_bandwidth_mb_sec": 0 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "bdev_iscsi_set_options", 00:15:14.608 "params": { 00:15:14.608 "timeout_sec": 30 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "bdev_nvme_set_options", 00:15:14.608 "params": { 00:15:14.608 "action_on_timeout": "none", 00:15:14.608 "timeout_us": 0, 00:15:14.608 "timeout_admin_us": 0, 00:15:14.608 "keep_alive_timeout_ms": 10000, 00:15:14.608 "arbitration_burst": 0, 00:15:14.608 "low_priority_weight": 0, 00:15:14.608 "medium_priority_weight": 0, 00:15:14.608 "high_priority_weight": 0, 00:15:14.608 "nvme_adminq_poll_period_us": 10000, 00:15:14.608 "nvme_ioq_poll_period_us": 0, 00:15:14.608 "io_queue_requests": 512, 00:15:14.608 "delay_cmd_submit": true, 00:15:14.608 "transport_retry_count": 4, 00:15:14.608 "bdev_retry_count": 3, 00:15:14.608 "transport_ack_timeout": 0, 00:15:14.608 "ctrlr_loss_timeout_sec": 0, 00:15:14.608 "reconnect_delay_sec": 0, 00:15:14.608 "fast_io_fail_timeout_sec": 0, 00:15:14.608 "disable_auto_failback": false, 00:15:14.608 "generate_uuids": false, 00:15:14.608 "transport_tos": 0, 00:15:14.608 "nvme_error_stat": false, 00:15:14.608 "rdma_srq_size": 0, 00:15:14.608 "io_path_stat": false, 00:15:14.608 "allow_accel_sequence": false, 00:15:14.608 "rdma_max_cq_size": 0, 00:15:14.608 "rdma_cm_event_timeout_ms": 0, 00:15:14.608 "dhchap_digests": [ 00:15:14.608 "sha256", 00:15:14.608 "sha384", 00:15:14.608 "sha512" 00:15:14.608 ], 00:15:14.608 "dhchap_dhgroups": [ 00:15:14.608 "null", 00:15:14.608 "ffdhe2048", 00:15:14.608 "ffdhe3072", 00:15:14.608 "ffdhe4096", 00:15:14.608 "ffdhe6144", 00:15:14.608 "ffdhe8192" 00:15:14.608 ] 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.608 "method": "bdev_nvme_attach_controller", 00:15:14.608 "params": { 00:15:14.608 "name": "TLSTEST", 00:15:14.608 "trtype": "TCP", 00:15:14.608 "adrfam": "IPv4", 00:15:14.608 "traddr": "10.0.0.3", 00:15:14.608 "trsvcid": "4420", 00:15:14.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.608 "prchk_reftag": false, 00:15:14.608 "prchk_guard": false, 00:15:14.608 "ctrlr_loss_timeout_sec": 0, 00:15:14.608 "reconnect_delay_sec": 0, 00:15:14.608 "fast_io_fail_timeout_sec": 0, 00:15:14.608 "psk": "key0", 00:15:14.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.608 "hdgst": false, 00:15:14.608 "ddgst": false, 00:15:14.608 "multipath": "multipath" 00:15:14.608 } 00:15:14.608 }, 00:15:14.608 { 00:15:14.609 "method": "bdev_nvme_set_hotplug", 00:15:14.609 "params": { 00:15:14.609 "period_us": 100000, 00:15:14.609 "enable": false 00:15:14.609 } 00:15:14.609 }, 00:15:14.609 { 00:15:14.609 "method": "bdev_wait_for_examine" 00:15:14.609 } 00:15:14.609 ] 00:15:14.609 }, 00:15:14.609 { 00:15:14.609 "subsystem": "nbd", 00:15:14.609 "config": [] 00:15:14.609 } 00:15:14.609 ] 00:15:14.609 }' 00:15:14.609 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.609 [2024-11-22 14:53:29.141193] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:15:14.609 [2024-11-22 14:53:29.141314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72324 ] 00:15:14.868 [2024-11-22 14:53:29.291986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.868 [2024-11-22 14:53:29.349740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.868 [2024-11-22 14:53:29.504018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.128 [2024-11-22 14:53:29.561003] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:15.695 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.695 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:15.695 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:15.695 Running I/O for 10 seconds... 00:15:18.008 4870.00 IOPS, 19.02 MiB/s [2024-11-22T14:53:33.612Z] 5021.50 IOPS, 19.62 MiB/s [2024-11-22T14:53:34.549Z] 4697.33 IOPS, 18.35 MiB/s [2024-11-22T14:53:35.501Z] 4503.25 IOPS, 17.59 MiB/s [2024-11-22T14:53:36.451Z] 4428.20 IOPS, 17.30 MiB/s [2024-11-22T14:53:37.387Z] 4370.83 IOPS, 17.07 MiB/s [2024-11-22T14:53:38.327Z] 4317.57 IOPS, 16.87 MiB/s [2024-11-22T14:53:39.703Z] 4269.88 IOPS, 16.68 MiB/s [2024-11-22T14:53:40.639Z] 4243.00 IOPS, 16.57 MiB/s [2024-11-22T14:53:40.639Z] 4230.30 IOPS, 16.52 MiB/s 00:15:25.974 Latency(us) 00:15:25.974 [2024-11-22T14:53:40.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.974 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.974 Verification LBA range: start 0x0 length 0x2000 00:15:25.974 TLSTESTn1 : 10.02 4233.54 16.54 0.00 0.00 30176.55 7000.44 39083.29 00:15:25.974 [2024-11-22T14:53:40.639Z] =================================================================================================================== 00:15:25.974 [2024-11-22T14:53:40.639Z] Total : 4233.54 16.54 0.00 0.00 30176.55 7000.44 39083.29 00:15:25.974 { 00:15:25.974 "results": [ 00:15:25.974 { 00:15:25.974 "job": "TLSTESTn1", 00:15:25.974 "core_mask": "0x4", 00:15:25.974 "workload": "verify", 00:15:25.974 "status": "finished", 00:15:25.974 "verify_range": { 00:15:25.974 "start": 0, 00:15:25.974 "length": 8192 00:15:25.974 }, 00:15:25.974 "queue_depth": 128, 00:15:25.974 "io_size": 4096, 00:15:25.974 "runtime": 10.022347, 00:15:25.974 "iops": 4233.5393097046035, 00:15:25.974 "mibps": 16.537262928533607, 00:15:25.974 "io_failed": 0, 00:15:25.974 "io_timeout": 0, 00:15:25.974 "avg_latency_us": 30176.55199091552, 00:15:25.974 "min_latency_us": 7000.436363636363, 00:15:25.974 "max_latency_us": 39083.28727272727 00:15:25.974 } 00:15:25.974 ], 00:15:25.974 "core_count": 1 00:15:25.974 } 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72324 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72324 ']' 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72324 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72324 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:25.974 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:25.974 killing process with pid 72324 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72324' 00:15:25.975 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.975 00:15:25.975 Latency(us) 00:15:25.975 [2024-11-22T14:53:40.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.975 [2024-11-22T14:53:40.640Z] =================================================================================================================== 00:15:25.975 [2024-11-22T14:53:40.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72324 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72324 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72292 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72292 ']' 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72292 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.975 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72292 00:15:26.234 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:26.234 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:26.234 killing process with pid 72292 00:15:26.234 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72292' 00:15:26.234 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72292 00:15:26.234 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72292 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72458 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:26.493 14:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72458 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72458 ']' 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.493 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.493 [2024-11-22 14:53:40.972669] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:26.493 [2024-11-22 14:53:40.972783] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.493 [2024-11-22 14:53:41.124301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.752 [2024-11-22 14:53:41.189254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.752 [2024-11-22 14:53:41.189310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.752 [2024-11-22 14:53:41.189325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.752 [2024-11-22 14:53:41.189335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.752 [2024-11-22 14:53:41.189344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
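[editor's note] The setup_nvmf_tgt step that follows drives the target entirely through rpc.py. A minimal sketch of the sequence, with rpc.py abbreviating /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the subsystem NQN, listen address, and PSK file /tmp/tmp.21R9er9XYg are specific to this run:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # listener created with -k; the target then logs that TLS support is experimental
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg                                          # register the PSK file as keyring entry key0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0  # allow host1, associating PSK key0 with it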
00:15:26.752 [2024-11-22 14:53:41.189825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.752 [2024-11-22 14:53:41.249026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.318 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.319 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:27.319 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:27.319 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.319 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.578 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.578 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.21R9er9XYg 00:15:27.578 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.21R9er9XYg 00:15:27.578 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:27.836 [2024-11-22 14:53:42.301052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.837 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:28.096 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:28.355 [2024-11-22 14:53:42.901190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.355 [2024-11-22 14:53:42.901513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.355 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:28.614 malloc0 00:15:28.614 14:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:29.182 14:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:29.441 14:53:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72519 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72519 /var/tmp/bdevperf.sock 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72519 ']' 00:15:29.700 
14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.700 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.700 [2024-11-22 14:53:44.231826] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:29.700 [2024-11-22 14:53:44.231929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72519 ] 00:15:29.958 [2024-11-22 14:53:44.380526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.959 [2024-11-22 14:53:44.444299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.959 [2024-11-22 14:53:44.527829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.959 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.959 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:29.959 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:30.527 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:30.786 [2024-11-22 14:53:45.228671] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.786 nvme0n1 00:15:30.786 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.045 Running I/O for 1 seconds... 
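[editor's note] On the initiator side the same PSK is registered with bdevperf's private RPC socket before the controller is attached over TCP with TLS. A minimal sketch of the commands just exercised above (rpc.py again abbreviates the full script path):

    # bdevperf started with its own RPC socket (target/tls.sh@222 in this run)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # register the PSK and attach the controller with --psk
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # run the workload; the results printed below come from this call
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests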
00:15:31.982 3503.00 IOPS, 13.68 MiB/s 00:15:31.982 Latency(us) 00:15:31.982 [2024-11-22T14:53:46.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.982 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.982 Verification LBA range: start 0x0 length 0x2000 00:15:31.982 nvme0n1 : 1.02 3545.87 13.85 0.00 0.00 35563.93 5779.08 26810.18 00:15:31.982 [2024-11-22T14:53:46.647Z] =================================================================================================================== 00:15:31.982 [2024-11-22T14:53:46.647Z] Total : 3545.87 13.85 0.00 0.00 35563.93 5779.08 26810.18 00:15:31.982 { 00:15:31.982 "results": [ 00:15:31.982 { 00:15:31.982 "job": "nvme0n1", 00:15:31.982 "core_mask": "0x2", 00:15:31.982 "workload": "verify", 00:15:31.982 "status": "finished", 00:15:31.982 "verify_range": { 00:15:31.982 "start": 0, 00:15:31.982 "length": 8192 00:15:31.982 }, 00:15:31.982 "queue_depth": 128, 00:15:31.982 "io_size": 4096, 00:15:31.982 "runtime": 1.024291, 00:15:31.982 "iops": 3545.867336528389, 00:15:31.982 "mibps": 13.85104428331402, 00:15:31.982 "io_failed": 0, 00:15:31.982 "io_timeout": 0, 00:15:31.982 "avg_latency_us": 35563.934193031644, 00:15:31.982 "min_latency_us": 5779.083636363636, 00:15:31.982 "max_latency_us": 26810.18181818182 00:15:31.982 } 00:15:31.982 ], 00:15:31.982 "core_count": 1 00:15:31.982 } 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72519 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72519 ']' 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72519 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72519 00:15:31.982 killing process with pid 72519 00:15:31.982 Received shutdown signal, test time was about 1.000000 seconds 00:15:31.982 00:15:31.982 Latency(us) 00:15:31.982 [2024-11-22T14:53:46.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.982 [2024-11-22T14:53:46.647Z] =================================================================================================================== 00:15:31.982 [2024-11-22T14:53:46.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72519' 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72519 00:15:31.982 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72519 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72458 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72458 ']' 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72458 00:15:32.242 14:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72458 00:15:32.242 killing process with pid 72458 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72458' 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72458 00:15:32.242 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72458 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72568 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72568 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72568 ']' 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.501 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.760 [2024-11-22 14:53:47.166466] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:32.760 [2024-11-22 14:53:47.166853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.760 [2024-11-22 14:53:47.313366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.760 [2024-11-22 14:53:47.378569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.760 [2024-11-22 14:53:47.378638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:32.760 [2024-11-22 14:53:47.378667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.760 [2024-11-22 14:53:47.378676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.760 [2024-11-22 14:53:47.378683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.760 [2024-11-22 14:53:47.379098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.019 [2024-11-22 14:53:47.451269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.587 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.587 [2024-11-22 14:53:48.220127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.587 malloc0 00:15:33.846 [2024-11-22 14:53:48.254552] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.846 [2024-11-22 14:53:48.254971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72600 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72600 /var/tmp/bdevperf.sock 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72600 ']' 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.846 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.846 [2024-11-22 14:53:48.345757] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:33.846 [2024-11-22 14:53:48.345864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72600 ] 00:15:33.846 [2024-11-22 14:53:48.498466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.105 [2024-11-22 14:53:48.579369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.105 [2024-11-22 14:53:48.658656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.053 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.053 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:35.053 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.21R9er9XYg 00:15:35.053 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:35.312 [2024-11-22 14:53:49.903565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.571 nvme0n1 00:15:35.571 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.571 Running I/O for 1 seconds... 
00:15:36.509 3812.00 IOPS, 14.89 MiB/s 00:15:36.509 Latency(us) 00:15:36.509 [2024-11-22T14:53:51.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.509 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.509 Verification LBA range: start 0x0 length 0x2000 00:15:36.509 nvme0n1 : 1.03 3825.58 14.94 0.00 0.00 33013.34 6374.87 20971.52 00:15:36.509 [2024-11-22T14:53:51.174Z] =================================================================================================================== 00:15:36.509 [2024-11-22T14:53:51.174Z] Total : 3825.58 14.94 0.00 0.00 33013.34 6374.87 20971.52 00:15:36.509 { 00:15:36.509 "results": [ 00:15:36.509 { 00:15:36.509 "job": "nvme0n1", 00:15:36.509 "core_mask": "0x2", 00:15:36.509 "workload": "verify", 00:15:36.509 "status": "finished", 00:15:36.509 "verify_range": { 00:15:36.509 "start": 0, 00:15:36.509 "length": 8192 00:15:36.509 }, 00:15:36.509 "queue_depth": 128, 00:15:36.509 "io_size": 4096, 00:15:36.509 "runtime": 1.029909, 00:15:36.509 "iops": 3825.580706644956, 00:15:36.509 "mibps": 14.94367463533186, 00:15:36.509 "io_failed": 0, 00:15:36.509 "io_timeout": 0, 00:15:36.509 "avg_latency_us": 33013.34038209506, 00:15:36.509 "min_latency_us": 6374.865454545455, 00:15:36.509 "max_latency_us": 20971.52 00:15:36.509 } 00:15:36.509 ], 00:15:36.509 "core_count": 1 00:15:36.509 } 00:15:36.509 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:36.509 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.509 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.784 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.784 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:36.784 "subsystems": [ 00:15:36.784 { 00:15:36.784 "subsystem": "keyring", 00:15:36.784 "config": [ 00:15:36.784 { 00:15:36.784 "method": "keyring_file_add_key", 00:15:36.784 "params": { 00:15:36.784 "name": "key0", 00:15:36.784 "path": "/tmp/tmp.21R9er9XYg" 00:15:36.784 } 00:15:36.784 } 00:15:36.784 ] 00:15:36.784 }, 00:15:36.784 { 00:15:36.784 "subsystem": "iobuf", 00:15:36.784 "config": [ 00:15:36.784 { 00:15:36.784 "method": "iobuf_set_options", 00:15:36.784 "params": { 00:15:36.784 "small_pool_count": 8192, 00:15:36.784 "large_pool_count": 1024, 00:15:36.784 "small_bufsize": 8192, 00:15:36.784 "large_bufsize": 135168, 00:15:36.784 "enable_numa": false 00:15:36.784 } 00:15:36.784 } 00:15:36.784 ] 00:15:36.784 }, 00:15:36.784 { 00:15:36.784 "subsystem": "sock", 00:15:36.784 "config": [ 00:15:36.784 { 00:15:36.784 "method": "sock_set_default_impl", 00:15:36.784 "params": { 00:15:36.784 "impl_name": "uring" 00:15:36.784 } 00:15:36.784 }, 00:15:36.784 { 00:15:36.784 "method": "sock_impl_set_options", 00:15:36.784 "params": { 00:15:36.784 "impl_name": "ssl", 00:15:36.784 "recv_buf_size": 4096, 00:15:36.784 "send_buf_size": 4096, 00:15:36.784 "enable_recv_pipe": true, 00:15:36.784 "enable_quickack": false, 00:15:36.784 "enable_placement_id": 0, 00:15:36.784 "enable_zerocopy_send_server": true, 00:15:36.784 "enable_zerocopy_send_client": false, 00:15:36.785 "zerocopy_threshold": 0, 00:15:36.785 "tls_version": 0, 00:15:36.785 "enable_ktls": false 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "sock_impl_set_options", 00:15:36.785 "params": { 00:15:36.785 "impl_name": "posix", 
00:15:36.785 "recv_buf_size": 2097152, 00:15:36.785 "send_buf_size": 2097152, 00:15:36.785 "enable_recv_pipe": true, 00:15:36.785 "enable_quickack": false, 00:15:36.785 "enable_placement_id": 0, 00:15:36.785 "enable_zerocopy_send_server": true, 00:15:36.785 "enable_zerocopy_send_client": false, 00:15:36.785 "zerocopy_threshold": 0, 00:15:36.785 "tls_version": 0, 00:15:36.785 "enable_ktls": false 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "sock_impl_set_options", 00:15:36.785 "params": { 00:15:36.785 "impl_name": "uring", 00:15:36.785 "recv_buf_size": 2097152, 00:15:36.785 "send_buf_size": 2097152, 00:15:36.785 "enable_recv_pipe": true, 00:15:36.785 "enable_quickack": false, 00:15:36.785 "enable_placement_id": 0, 00:15:36.785 "enable_zerocopy_send_server": false, 00:15:36.785 "enable_zerocopy_send_client": false, 00:15:36.785 "zerocopy_threshold": 0, 00:15:36.785 "tls_version": 0, 00:15:36.785 "enable_ktls": false 00:15:36.785 } 00:15:36.785 } 00:15:36.785 ] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "vmd", 00:15:36.785 "config": [] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "accel", 00:15:36.785 "config": [ 00:15:36.785 { 00:15:36.785 "method": "accel_set_options", 00:15:36.785 "params": { 00:15:36.785 "small_cache_size": 128, 00:15:36.785 "large_cache_size": 16, 00:15:36.785 "task_count": 2048, 00:15:36.785 "sequence_count": 2048, 00:15:36.785 "buf_count": 2048 00:15:36.785 } 00:15:36.785 } 00:15:36.785 ] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "bdev", 00:15:36.785 "config": [ 00:15:36.785 { 00:15:36.785 "method": "bdev_set_options", 00:15:36.785 "params": { 00:15:36.785 "bdev_io_pool_size": 65535, 00:15:36.785 "bdev_io_cache_size": 256, 00:15:36.785 "bdev_auto_examine": true, 00:15:36.785 "iobuf_small_cache_size": 128, 00:15:36.785 "iobuf_large_cache_size": 16 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_raid_set_options", 00:15:36.785 "params": { 00:15:36.785 "process_window_size_kb": 1024, 00:15:36.785 "process_max_bandwidth_mb_sec": 0 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_iscsi_set_options", 00:15:36.785 "params": { 00:15:36.785 "timeout_sec": 30 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_nvme_set_options", 00:15:36.785 "params": { 00:15:36.785 "action_on_timeout": "none", 00:15:36.785 "timeout_us": 0, 00:15:36.785 "timeout_admin_us": 0, 00:15:36.785 "keep_alive_timeout_ms": 10000, 00:15:36.785 "arbitration_burst": 0, 00:15:36.785 "low_priority_weight": 0, 00:15:36.785 "medium_priority_weight": 0, 00:15:36.785 "high_priority_weight": 0, 00:15:36.785 "nvme_adminq_poll_period_us": 10000, 00:15:36.785 "nvme_ioq_poll_period_us": 0, 00:15:36.785 "io_queue_requests": 0, 00:15:36.785 "delay_cmd_submit": true, 00:15:36.785 "transport_retry_count": 4, 00:15:36.785 "bdev_retry_count": 3, 00:15:36.785 "transport_ack_timeout": 0, 00:15:36.785 "ctrlr_loss_timeout_sec": 0, 00:15:36.785 "reconnect_delay_sec": 0, 00:15:36.785 "fast_io_fail_timeout_sec": 0, 00:15:36.785 "disable_auto_failback": false, 00:15:36.785 "generate_uuids": false, 00:15:36.785 "transport_tos": 0, 00:15:36.785 "nvme_error_stat": false, 00:15:36.785 "rdma_srq_size": 0, 00:15:36.785 "io_path_stat": false, 00:15:36.785 "allow_accel_sequence": false, 00:15:36.785 "rdma_max_cq_size": 0, 00:15:36.785 "rdma_cm_event_timeout_ms": 0, 00:15:36.785 "dhchap_digests": [ 00:15:36.785 "sha256", 00:15:36.785 "sha384", 00:15:36.785 "sha512" 00:15:36.785 ], 00:15:36.785 
"dhchap_dhgroups": [ 00:15:36.785 "null", 00:15:36.785 "ffdhe2048", 00:15:36.785 "ffdhe3072", 00:15:36.785 "ffdhe4096", 00:15:36.785 "ffdhe6144", 00:15:36.785 "ffdhe8192" 00:15:36.785 ] 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_nvme_set_hotplug", 00:15:36.785 "params": { 00:15:36.785 "period_us": 100000, 00:15:36.785 "enable": false 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_malloc_create", 00:15:36.785 "params": { 00:15:36.785 "name": "malloc0", 00:15:36.785 "num_blocks": 8192, 00:15:36.785 "block_size": 4096, 00:15:36.785 "physical_block_size": 4096, 00:15:36.785 "uuid": "e7ef5ecd-0edd-4e02-b0b8-36745e415835", 00:15:36.785 "optimal_io_boundary": 0, 00:15:36.785 "md_size": 0, 00:15:36.785 "dif_type": 0, 00:15:36.785 "dif_is_head_of_md": false, 00:15:36.785 "dif_pi_format": 0 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "bdev_wait_for_examine" 00:15:36.785 } 00:15:36.785 ] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "nbd", 00:15:36.785 "config": [] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "scheduler", 00:15:36.785 "config": [ 00:15:36.785 { 00:15:36.785 "method": "framework_set_scheduler", 00:15:36.785 "params": { 00:15:36.785 "name": "static" 00:15:36.785 } 00:15:36.785 } 00:15:36.785 ] 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "subsystem": "nvmf", 00:15:36.785 "config": [ 00:15:36.785 { 00:15:36.785 "method": "nvmf_set_config", 00:15:36.785 "params": { 00:15:36.785 "discovery_filter": "match_any", 00:15:36.785 "admin_cmd_passthru": { 00:15:36.785 "identify_ctrlr": false 00:15:36.785 }, 00:15:36.785 "dhchap_digests": [ 00:15:36.785 "sha256", 00:15:36.785 "sha384", 00:15:36.785 "sha512" 00:15:36.785 ], 00:15:36.785 "dhchap_dhgroups": [ 00:15:36.785 "null", 00:15:36.785 "ffdhe2048", 00:15:36.785 "ffdhe3072", 00:15:36.785 "ffdhe4096", 00:15:36.785 "ffdhe6144", 00:15:36.785 "ffdhe8192" 00:15:36.785 ] 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "nvmf_set_max_subsystems", 00:15:36.785 "params": { 00:15:36.785 "max_subsystems": 1024 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "nvmf_set_crdt", 00:15:36.785 "params": { 00:15:36.785 "crdt1": 0, 00:15:36.785 "crdt2": 0, 00:15:36.785 "crdt3": 0 00:15:36.785 } 00:15:36.785 }, 00:15:36.785 { 00:15:36.785 "method": "nvmf_create_transport", 00:15:36.785 "params": { 00:15:36.785 "trtype": "TCP", 00:15:36.785 "max_queue_depth": 128, 00:15:36.785 "max_io_qpairs_per_ctrlr": 127, 00:15:36.785 "in_capsule_data_size": 4096, 00:15:36.785 "max_io_size": 131072, 00:15:36.786 "io_unit_size": 131072, 00:15:36.786 "max_aq_depth": 128, 00:15:36.786 "num_shared_buffers": 511, 00:15:36.786 "buf_cache_size": 4294967295, 00:15:36.786 "dif_insert_or_strip": false, 00:15:36.786 "zcopy": false, 00:15:36.786 "c2h_success": false, 00:15:36.786 "sock_priority": 0, 00:15:36.786 "abort_timeout_sec": 1, 00:15:36.786 "ack_timeout": 0, 00:15:36.786 "data_wr_pool_size": 0 00:15:36.786 } 00:15:36.786 }, 00:15:36.786 { 00:15:36.786 "method": "nvmf_create_subsystem", 00:15:36.786 "params": { 00:15:36.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.786 "allow_any_host": false, 00:15:36.786 "serial_number": "00000000000000000000", 00:15:36.786 "model_number": "SPDK bdev Controller", 00:15:36.786 "max_namespaces": 32, 00:15:36.786 "min_cntlid": 1, 00:15:36.786 "max_cntlid": 65519, 00:15:36.786 "ana_reporting": false 00:15:36.786 } 00:15:36.786 }, 00:15:36.786 { 00:15:36.786 "method": "nvmf_subsystem_add_host", 
00:15:36.786 "params": { 00:15:36.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.786 "host": "nqn.2016-06.io.spdk:host1", 00:15:36.786 "psk": "key0" 00:15:36.786 } 00:15:36.786 }, 00:15:36.786 { 00:15:36.786 "method": "nvmf_subsystem_add_ns", 00:15:36.786 "params": { 00:15:36.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.786 "namespace": { 00:15:36.786 "nsid": 1, 00:15:36.786 "bdev_name": "malloc0", 00:15:36.786 "nguid": "E7EF5ECD0EDD4E02B0B836745E415835", 00:15:36.786 "uuid": "e7ef5ecd-0edd-4e02-b0b8-36745e415835", 00:15:36.786 "no_auto_visible": false 00:15:36.786 } 00:15:36.786 } 00:15:36.786 }, 00:15:36.786 { 00:15:36.786 "method": "nvmf_subsystem_add_listener", 00:15:36.786 "params": { 00:15:36.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.786 "listen_address": { 00:15:36.786 "trtype": "TCP", 00:15:36.786 "adrfam": "IPv4", 00:15:36.786 "traddr": "10.0.0.3", 00:15:36.786 "trsvcid": "4420" 00:15:36.786 }, 00:15:36.786 "secure_channel": false, 00:15:36.786 "sock_impl": "ssl" 00:15:36.786 } 00:15:36.786 } 00:15:36.786 ] 00:15:36.786 } 00:15:36.786 ] 00:15:36.786 }' 00:15:36.786 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:37.055 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:37.055 "subsystems": [ 00:15:37.055 { 00:15:37.055 "subsystem": "keyring", 00:15:37.055 "config": [ 00:15:37.055 { 00:15:37.055 "method": "keyring_file_add_key", 00:15:37.055 "params": { 00:15:37.055 "name": "key0", 00:15:37.055 "path": "/tmp/tmp.21R9er9XYg" 00:15:37.055 } 00:15:37.055 } 00:15:37.055 ] 00:15:37.055 }, 00:15:37.055 { 00:15:37.055 "subsystem": "iobuf", 00:15:37.055 "config": [ 00:15:37.055 { 00:15:37.055 "method": "iobuf_set_options", 00:15:37.055 "params": { 00:15:37.055 "small_pool_count": 8192, 00:15:37.055 "large_pool_count": 1024, 00:15:37.055 "small_bufsize": 8192, 00:15:37.055 "large_bufsize": 135168, 00:15:37.055 "enable_numa": false 00:15:37.055 } 00:15:37.055 } 00:15:37.055 ] 00:15:37.055 }, 00:15:37.055 { 00:15:37.055 "subsystem": "sock", 00:15:37.055 "config": [ 00:15:37.055 { 00:15:37.055 "method": "sock_set_default_impl", 00:15:37.055 "params": { 00:15:37.055 "impl_name": "uring" 00:15:37.055 } 00:15:37.055 }, 00:15:37.055 { 00:15:37.055 "method": "sock_impl_set_options", 00:15:37.055 "params": { 00:15:37.055 "impl_name": "ssl", 00:15:37.055 "recv_buf_size": 4096, 00:15:37.055 "send_buf_size": 4096, 00:15:37.055 "enable_recv_pipe": true, 00:15:37.055 "enable_quickack": false, 00:15:37.055 "enable_placement_id": 0, 00:15:37.055 "enable_zerocopy_send_server": true, 00:15:37.055 "enable_zerocopy_send_client": false, 00:15:37.055 "zerocopy_threshold": 0, 00:15:37.055 "tls_version": 0, 00:15:37.055 "enable_ktls": false 00:15:37.055 } 00:15:37.055 }, 00:15:37.055 { 00:15:37.055 "method": "sock_impl_set_options", 00:15:37.055 "params": { 00:15:37.055 "impl_name": "posix", 00:15:37.055 "recv_buf_size": 2097152, 00:15:37.055 "send_buf_size": 2097152, 00:15:37.055 "enable_recv_pipe": true, 00:15:37.055 "enable_quickack": false, 00:15:37.055 "enable_placement_id": 0, 00:15:37.055 "enable_zerocopy_send_server": true, 00:15:37.055 "enable_zerocopy_send_client": false, 00:15:37.055 "zerocopy_threshold": 0, 00:15:37.055 "tls_version": 0, 00:15:37.055 "enable_ktls": false 00:15:37.055 } 00:15:37.055 }, 00:15:37.055 { 00:15:37.055 "method": "sock_impl_set_options", 00:15:37.055 "params": { 00:15:37.055 "impl_name": "uring", 00:15:37.055 
"recv_buf_size": 2097152, 00:15:37.055 "send_buf_size": 2097152, 00:15:37.055 "enable_recv_pipe": true, 00:15:37.056 "enable_quickack": false, 00:15:37.056 "enable_placement_id": 0, 00:15:37.056 "enable_zerocopy_send_server": false, 00:15:37.056 "enable_zerocopy_send_client": false, 00:15:37.056 "zerocopy_threshold": 0, 00:15:37.056 "tls_version": 0, 00:15:37.056 "enable_ktls": false 00:15:37.056 } 00:15:37.056 } 00:15:37.056 ] 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "subsystem": "vmd", 00:15:37.056 "config": [] 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "subsystem": "accel", 00:15:37.056 "config": [ 00:15:37.056 { 00:15:37.056 "method": "accel_set_options", 00:15:37.056 "params": { 00:15:37.056 "small_cache_size": 128, 00:15:37.056 "large_cache_size": 16, 00:15:37.056 "task_count": 2048, 00:15:37.056 "sequence_count": 2048, 00:15:37.056 "buf_count": 2048 00:15:37.056 } 00:15:37.056 } 00:15:37.056 ] 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "subsystem": "bdev", 00:15:37.056 "config": [ 00:15:37.056 { 00:15:37.056 "method": "bdev_set_options", 00:15:37.056 "params": { 00:15:37.056 "bdev_io_pool_size": 65535, 00:15:37.056 "bdev_io_cache_size": 256, 00:15:37.056 "bdev_auto_examine": true, 00:15:37.056 "iobuf_small_cache_size": 128, 00:15:37.056 "iobuf_large_cache_size": 16 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_raid_set_options", 00:15:37.056 "params": { 00:15:37.056 "process_window_size_kb": 1024, 00:15:37.056 "process_max_bandwidth_mb_sec": 0 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_iscsi_set_options", 00:15:37.056 "params": { 00:15:37.056 "timeout_sec": 30 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_nvme_set_options", 00:15:37.056 "params": { 00:15:37.056 "action_on_timeout": "none", 00:15:37.056 "timeout_us": 0, 00:15:37.056 "timeout_admin_us": 0, 00:15:37.056 "keep_alive_timeout_ms": 10000, 00:15:37.056 "arbitration_burst": 0, 00:15:37.056 "low_priority_weight": 0, 00:15:37.056 "medium_priority_weight": 0, 00:15:37.056 "high_priority_weight": 0, 00:15:37.056 "nvme_adminq_poll_period_us": 10000, 00:15:37.056 "nvme_ioq_poll_period_us": 0, 00:15:37.056 "io_queue_requests": 512, 00:15:37.056 "delay_cmd_submit": true, 00:15:37.056 "transport_retry_count": 4, 00:15:37.056 "bdev_retry_count": 3, 00:15:37.056 "transport_ack_timeout": 0, 00:15:37.056 "ctrlr_loss_timeout_sec": 0, 00:15:37.056 "reconnect_delay_sec": 0, 00:15:37.056 "fast_io_fail_timeout_sec": 0, 00:15:37.056 "disable_auto_failback": false, 00:15:37.056 "generate_uuids": false, 00:15:37.056 "transport_tos": 0, 00:15:37.056 "nvme_error_stat": false, 00:15:37.056 "rdma_srq_size": 0, 00:15:37.056 "io_path_stat": false, 00:15:37.056 "allow_accel_sequence": false, 00:15:37.056 "rdma_max_cq_size": 0, 00:15:37.056 "rdma_cm_event_timeout_ms": 0, 00:15:37.056 "dhchap_digests": [ 00:15:37.056 "sha256", 00:15:37.056 "sha384", 00:15:37.056 "sha512" 00:15:37.056 ], 00:15:37.056 "dhchap_dhgroups": [ 00:15:37.056 "null", 00:15:37.056 "ffdhe2048", 00:15:37.056 "ffdhe3072", 00:15:37.056 "ffdhe4096", 00:15:37.056 "ffdhe6144", 00:15:37.056 "ffdhe8192" 00:15:37.056 ] 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_nvme_attach_controller", 00:15:37.056 "params": { 00:15:37.056 "name": "nvme0", 00:15:37.056 "trtype": "TCP", 00:15:37.056 "adrfam": "IPv4", 00:15:37.056 "traddr": "10.0.0.3", 00:15:37.056 "trsvcid": "4420", 00:15:37.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.056 "prchk_reftag": false, 00:15:37.056 
"prchk_guard": false, 00:15:37.056 "ctrlr_loss_timeout_sec": 0, 00:15:37.056 "reconnect_delay_sec": 0, 00:15:37.056 "fast_io_fail_timeout_sec": 0, 00:15:37.056 "psk": "key0", 00:15:37.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.056 "hdgst": false, 00:15:37.056 "ddgst": false, 00:15:37.056 "multipath": "multipath" 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_nvme_set_hotplug", 00:15:37.056 "params": { 00:15:37.056 "period_us": 100000, 00:15:37.056 "enable": false 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_enable_histogram", 00:15:37.056 "params": { 00:15:37.056 "name": "nvme0n1", 00:15:37.056 "enable": true 00:15:37.056 } 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "method": "bdev_wait_for_examine" 00:15:37.056 } 00:15:37.056 ] 00:15:37.056 }, 00:15:37.056 { 00:15:37.056 "subsystem": "nbd", 00:15:37.056 "config": [] 00:15:37.056 } 00:15:37.056 ] 00:15:37.056 }' 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72600 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72600 ']' 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72600 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.056 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72600 00:15:37.315 killing process with pid 72600 00:15:37.315 Received shutdown signal, test time was about 1.000000 seconds 00:15:37.315 00:15:37.315 Latency(us) 00:15:37.315 [2024-11-22T14:53:51.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.315 [2024-11-22T14:53:51.980Z] =================================================================================================================== 00:15:37.315 [2024-11-22T14:53:51.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72600' 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72600 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72600 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72568 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72568 ']' 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72568 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:37.315 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.574 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72568 00:15:37.574 killing process with pid 72568 00:15:37.574 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:37.574 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.574 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72568' 00:15:37.574 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72568 00:15:37.574 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72568 00:15:37.834 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:37.834 "subsystems": [ 00:15:37.834 { 00:15:37.834 "subsystem": "keyring", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "keyring_file_add_key", 00:15:37.834 "params": { 00:15:37.834 "name": "key0", 00:15:37.834 "path": "/tmp/tmp.21R9er9XYg" 00:15:37.834 } 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "iobuf", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "iobuf_set_options", 00:15:37.834 "params": { 00:15:37.834 "small_pool_count": 8192, 00:15:37.834 "large_pool_count": 1024, 00:15:37.834 "small_bufsize": 8192, 00:15:37.834 "large_bufsize": 135168, 00:15:37.834 "enable_numa": false 00:15:37.834 } 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "sock", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "sock_set_default_impl", 00:15:37.834 "params": { 00:15:37.834 "impl_name": "uring" 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "sock_impl_set_options", 00:15:37.834 "params": { 00:15:37.834 "impl_name": "ssl", 00:15:37.834 "recv_buf_size": 4096, 00:15:37.834 "send_buf_size": 4096, 00:15:37.834 "enable_recv_pipe": true, 00:15:37.834 "enable_quickack": false, 00:15:37.834 "enable_placement_id": 0, 00:15:37.834 "enable_zerocopy_send_server": true, 00:15:37.834 "enable_zerocopy_send_client": false, 00:15:37.834 "zerocopy_threshold": 0, 00:15:37.834 "tls_version": 0, 00:15:37.834 "enable_ktls": false 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "sock_impl_set_options", 00:15:37.834 "params": { 00:15:37.834 "impl_name": "posix", 00:15:37.834 "recv_buf_size": 2097152, 00:15:37.834 "send_buf_size": 2097152, 00:15:37.834 "enable_recv_pipe": true, 00:15:37.834 "enable_quickack": false, 00:15:37.834 "enable_placement_id": 0, 00:15:37.834 "enable_zerocopy_send_server": true, 00:15:37.834 "enable_zerocopy_send_client": false, 00:15:37.834 "zerocopy_threshold": 0, 00:15:37.834 "tls_version": 0, 00:15:37.834 "enable_ktls": false 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "sock_impl_set_options", 00:15:37.834 "params": { 00:15:37.834 "impl_name": "uring", 00:15:37.834 "recv_buf_size": 2097152, 00:15:37.834 "send_buf_size": 2097152, 00:15:37.834 "enable_recv_pipe": true, 00:15:37.834 "enable_quickack": false, 00:15:37.834 "enable_placement_id": 0, 00:15:37.834 "enable_zerocopy_send_server": false, 00:15:37.834 "enable_zerocopy_send_client": false, 00:15:37.834 "zerocopy_threshold": 0, 00:15:37.834 "tls_version": 0, 00:15:37.834 "enable_ktls": false 00:15:37.834 } 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "vmd", 00:15:37.834 "config": [] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "accel", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "accel_set_options", 00:15:37.834 "params": { 00:15:37.834 "small_cache_size": 128, 00:15:37.834 "large_cache_size": 16, 00:15:37.834 "task_count": 
2048, 00:15:37.834 "sequence_count": 2048, 00:15:37.834 "buf_count": 2048 00:15:37.834 } 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "bdev", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "bdev_set_options", 00:15:37.834 "params": { 00:15:37.834 "bdev_io_pool_size": 65535, 00:15:37.834 "bdev_io_cache_size": 256, 00:15:37.834 "bdev_auto_examine": true, 00:15:37.834 "iobuf_small_cache_size": 128, 00:15:37.834 "iobuf_large_cache_size": 16 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_raid_set_options", 00:15:37.834 "params": { 00:15:37.834 "process_window_size_kb": 1024, 00:15:37.834 "process_max_bandwidth_mb_sec": 0 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_iscsi_set_options", 00:15:37.834 "params": { 00:15:37.834 "timeout_sec": 30 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_nvme_set_options", 00:15:37.834 "params": { 00:15:37.834 "action_on_timeout": "none", 00:15:37.834 "timeout_us": 0, 00:15:37.834 "timeout_admin_us": 0, 00:15:37.834 "keep_alive_timeout_ms": 10000, 00:15:37.834 "arbitration_burst": 0, 00:15:37.834 "low_priority_weight": 0, 00:15:37.834 "medium_priority_weight": 0, 00:15:37.834 "high_priority_weight": 0, 00:15:37.834 "nvme_adminq_poll_period_us": 10000, 00:15:37.834 "nvme_ioq_poll_period_us": 0, 00:15:37.834 "io_queue_requests": 0, 00:15:37.834 "delay_cmd_submit": true, 00:15:37.834 "transport_retry_count": 4, 00:15:37.834 "bdev_retry_count": 3, 00:15:37.834 "transport_ack_timeout": 0, 00:15:37.834 "ctrlr_loss_timeout_sec": 0, 00:15:37.834 "reconnect_delay_sec": 0, 00:15:37.834 "fast_io_fail_timeout_sec": 0, 00:15:37.834 "disable_auto_failback": false, 00:15:37.834 "generate_uuids": false, 00:15:37.834 "transport_tos": 0, 00:15:37.834 "nvme_error_stat": false, 00:15:37.834 "rdma_srq_size": 0, 00:15:37.834 "io_path_stat": false, 00:15:37.834 "allow_accel_sequence": false, 00:15:37.834 "rdma_max_cq_size": 0, 00:15:37.834 "rdma_cm_event_timeout_ms": 0, 00:15:37.834 "dhchap_digests": [ 00:15:37.834 "sha256", 00:15:37.834 "sha384", 00:15:37.834 "sha512" 00:15:37.834 ], 00:15:37.834 "dhchap_dhgroups": [ 00:15:37.834 "null", 00:15:37.834 "ffdhe2048", 00:15:37.834 "ffdhe3072", 00:15:37.834 "ffdhe4096", 00:15:37.834 "ffdhe6144", 00:15:37.834 "ffdhe8192" 00:15:37.834 ] 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_nvme_set_hotplug", 00:15:37.834 "params": { 00:15:37.834 "period_us": 100000, 00:15:37.834 "enable": false 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_malloc_create", 00:15:37.834 "params": { 00:15:37.834 "name": "malloc0", 00:15:37.834 "num_blocks": 8192, 00:15:37.834 "block_size": 4096, 00:15:37.834 "physical_block_size": 4096, 00:15:37.834 "uuid": "e7ef5ecd-0edd-4e02-b0b8-36745e415835", 00:15:37.834 "optimal_io_boundary": 0, 00:15:37.834 "md_size": 0, 00:15:37.834 "dif_type": 0, 00:15:37.834 "dif_is_head_of_md": false, 00:15:37.834 "dif_pi_format": 0 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "bdev_wait_for_examine" 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "nbd", 00:15:37.834 "config": [] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "subsystem": "scheduler", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "framework_set_scheduler", 00:15:37.834 "params": { 00:15:37.834 "name": "static" 00:15:37.834 } 00:15:37.834 } 00:15:37.834 ] 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 
"subsystem": "nvmf", 00:15:37.834 "config": [ 00:15:37.834 { 00:15:37.834 "method": "nvmf_set_config", 00:15:37.834 "params": { 00:15:37.834 "discovery_filter": "match_any", 00:15:37.834 "admin_cmd_passthru": { 00:15:37.834 "identify_ctrlr": false 00:15:37.834 }, 00:15:37.834 "dhchap_digests": [ 00:15:37.834 "sha256", 00:15:37.834 "sha384", 00:15:37.834 "sha512" 00:15:37.834 ], 00:15:37.834 "dhchap_dhgroups": [ 00:15:37.834 "null", 00:15:37.834 "ffdhe2048", 00:15:37.834 "ffdhe3072", 00:15:37.834 "ffdhe4096", 00:15:37.834 "ffdhe6144", 00:15:37.834 "ffdhe8192" 00:15:37.834 ] 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "nvmf_set_max_subsyste 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:37.834 ms", 00:15:37.834 "params": { 00:15:37.834 "max_subsystems": 1024 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "nvmf_set_crdt", 00:15:37.834 "params": { 00:15:37.834 "crdt1": 0, 00:15:37.834 "crdt2": 0, 00:15:37.834 "crdt3": 0 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "nvmf_create_transport", 00:15:37.834 "params": { 00:15:37.834 "trtype": "TCP", 00:15:37.834 "max_queue_depth": 128, 00:15:37.834 "max_io_qpairs_per_ctrlr": 127, 00:15:37.834 "in_capsule_data_size": 4096, 00:15:37.834 "max_io_size": 131072, 00:15:37.834 "io_unit_size": 131072, 00:15:37.834 "max_aq_depth": 128, 00:15:37.834 "num_shared_buffers": 511, 00:15:37.834 "buf_cache_size": 4294967295, 00:15:37.834 "dif_insert_or_strip": false, 00:15:37.834 "zcopy": false, 00:15:37.834 "c2h_success": false, 00:15:37.834 "sock_priority": 0, 00:15:37.834 "abort_timeout_sec": 1, 00:15:37.834 "ack_timeout": 0, 00:15:37.834 "data_wr_pool_size": 0 00:15:37.834 } 00:15:37.834 }, 00:15:37.834 { 00:15:37.834 "method": "nvmf_create_subsystem", 00:15:37.834 "params": { 00:15:37.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.835 "allow_any_host": false, 00:15:37.835 "serial_number": "00000000000000000000", 00:15:37.835 "model_number": "SPDK bdev Controller", 00:15:37.835 "max_namespaces": 32, 00:15:37.835 "min_cntlid": 1, 00:15:37.835 "max_cntlid": 65519, 00:15:37.835 "ana_reporting": false 00:15:37.835 } 00:15:37.835 }, 00:15:37.835 { 00:15:37.835 "method": "nvmf_subsystem_add_host", 00:15:37.835 "params": { 00:15:37.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.835 "host": "nqn.2016-06.io.spdk:host1", 00:15:37.835 "psk": "key0" 00:15:37.835 } 00:15:37.835 }, 00:15:37.835 { 00:15:37.835 "method": "nvmf_subsystem_add_ns", 00:15:37.835 "params": { 00:15:37.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.835 "namespace": { 00:15:37.835 "nsid": 1, 00:15:37.835 "bdev_name": "malloc0", 00:15:37.835 "nguid": "E7EF5ECD0EDD4E02B0B836745E415835", 00:15:37.835 "uuid": "e7ef5ecd-0edd-4e02-b0b8-36745e415835", 00:15:37.835 "no_auto_visible": false 00:15:37.835 } 00:15:37.835 } 00:15:37.835 }, 00:15:37.835 { 00:15:37.835 "method": "nvmf_subsystem_add_listener", 00:15:37.835 "params": { 00:15:37.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.835 "listen_address": { 00:15:37.835 "trtype": "TCP", 00:15:37.835 "adrfam": "IPv4", 00:15:37.835 "traddr": "10.0.0.3", 00:15:37.835 "trsvcid": "4420" 00:15:37.835 }, 00:15:37.835 "secure_channel": false, 00:15:37.835 "sock_impl": "ssl" 00:15:37.835 } 00:15:37.835 } 00:15:37.835 ] 00:15:37.835 } 00:15:37.835 ] 00:15:37.835 }' 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72661 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72661 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72661 ']' 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.835 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.835 [2024-11-22 14:53:52.354847] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:37.835 [2024-11-22 14:53:52.355199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.093 [2024-11-22 14:53:52.509800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.093 [2024-11-22 14:53:52.578768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.093 [2024-11-22 14:53:52.579017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.093 [2024-11-22 14:53:52.579199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.093 [2024-11-22 14:53:52.579355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.093 [2024-11-22 14:53:52.579622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
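[editor's note] This last phase replays the configuration captured earlier: the two JSON blobs come from save_config on the target and on bdevperf (target/tls.sh@267-268) and are fed back through -c /dev/fd/62 and -c /dev/fd/63 when both applications are restarted. A minimal sketch, assuming the descriptors are supplied via bash process substitution (the exact plumbing inside tls.sh is not visible in this log; rpc_cmd is the test suite's wrapper for rpc.py on the default target socket):

    tgtcfg=$(rpc_cmd save_config)                                    # dump the live target configuration as JSON
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)         # dump the live bdevperf configuration
    # restart both applications directly from the captured JSON
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &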
00:15:38.093 [2024-11-22 14:53:52.580224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.352 [2024-11-22 14:53:52.771468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.352 [2024-11-22 14:53:52.863562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.352 [2024-11-22 14:53:52.895556] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.352 [2024-11-22 14:53:52.895986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72693 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72693 /var/tmp/bdevperf.sock 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72693 ']' 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.918 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:38.918 "subsystems": [ 00:15:38.918 { 00:15:38.918 "subsystem": "keyring", 00:15:38.918 "config": [ 00:15:38.918 { 00:15:38.918 "method": "keyring_file_add_key", 00:15:38.918 "params": { 00:15:38.918 "name": "key0", 00:15:38.918 "path": "/tmp/tmp.21R9er9XYg" 00:15:38.918 } 00:15:38.918 } 00:15:38.918 ] 00:15:38.918 }, 00:15:38.918 { 00:15:38.918 "subsystem": "iobuf", 00:15:38.918 "config": [ 00:15:38.918 { 00:15:38.918 "method": "iobuf_set_options", 00:15:38.918 "params": { 00:15:38.918 "small_pool_count": 8192, 00:15:38.918 "large_pool_count": 1024, 00:15:38.918 "small_bufsize": 8192, 00:15:38.918 "large_bufsize": 135168, 00:15:38.918 "enable_numa": false 00:15:38.918 } 00:15:38.918 } 00:15:38.918 ] 00:15:38.918 }, 00:15:38.918 { 00:15:38.918 "subsystem": "sock", 00:15:38.918 "config": [ 00:15:38.918 { 00:15:38.918 "method": "sock_set_default_impl", 00:15:38.918 "params": { 00:15:38.918 "impl_name": "uring" 00:15:38.918 } 00:15:38.918 }, 00:15:38.918 { 00:15:38.918 "method": "sock_impl_set_options", 00:15:38.918 "params": { 00:15:38.918 "impl_name": "ssl", 00:15:38.918 "recv_buf_size": 4096, 00:15:38.918 "send_buf_size": 4096, 00:15:38.918 "enable_recv_pipe": true, 00:15:38.918 "enable_quickack": false, 00:15:38.918 "enable_placement_id": 0, 00:15:38.918 "enable_zerocopy_send_server": true, 00:15:38.918 "enable_zerocopy_send_client": false, 00:15:38.918 "zerocopy_threshold": 0, 00:15:38.918 "tls_version": 0, 00:15:38.918 "enable_ktls": 
false 00:15:38.918 } 00:15:38.918 }, 00:15:38.918 { 00:15:38.918 "method": "sock_impl_set_options", 00:15:38.919 "params": { 00:15:38.919 "impl_name": "posix", 00:15:38.919 "recv_buf_size": 2097152, 00:15:38.919 "send_buf_size": 2097152, 00:15:38.919 "enable_recv_pipe": true, 00:15:38.919 "enable_quickack": false, 00:15:38.919 "enable_placement_id": 0, 00:15:38.919 "enable_zerocopy_send_server": true, 00:15:38.919 "enable_zerocopy_send_client": false, 00:15:38.919 "zerocopy_threshold": 0, 00:15:38.919 "tls_version": 0, 00:15:38.919 "enable_ktls": false 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "sock_impl_set_options", 00:15:38.919 "params": { 00:15:38.919 "impl_name": "uring", 00:15:38.919 "recv_buf_size": 2097152, 00:15:38.919 "send_buf_size": 2097152, 00:15:38.919 "enable_recv_pipe": true, 00:15:38.919 "enable_quickack": false, 00:15:38.919 "enable_placement_id": 0, 00:15:38.919 "enable_zerocopy_send_server": false, 00:15:38.919 "enable_zerocopy_send_client": false, 00:15:38.919 "zerocopy_threshold": 0, 00:15:38.919 "tls_version": 0, 00:15:38.919 "enable_ktls": false 00:15:38.919 } 00:15:38.919 } 00:15:38.919 ] 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "subsystem": "vmd", 00:15:38.919 "config": [] 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "subsystem": "accel", 00:15:38.919 "config": [ 00:15:38.919 { 00:15:38.919 "method": "accel_set_options", 00:15:38.919 "params": { 00:15:38.919 "small_cache_size": 128, 00:15:38.919 "large_cache_size": 16, 00:15:38.919 "task_count": 2048, 00:15:38.919 "sequence_count": 2048, 00:15:38.919 "buf_count": 2048 00:15:38.919 } 00:15:38.919 } 00:15:38.919 ] 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "subsystem": "bdev", 00:15:38.919 "config": [ 00:15:38.919 { 00:15:38.919 "method": "bdev_set_options", 00:15:38.919 "params": { 00:15:38.919 "bdev_io_pool_size": 65535, 00:15:38.919 "bdev_io_cache_size": 256, 00:15:38.919 "bdev_auto_examine": true, 00:15:38.919 "iobuf_small_cache_size": 128, 00:15:38.919 "iobuf_large_cache_size": 16 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_raid_set_options", 00:15:38.919 "params": { 00:15:38.919 "process_window_size_kb": 1024, 00:15:38.919 "process_max_bandwidth_mb_sec": 0 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_iscsi_set_options", 00:15:38.919 "params": { 00:15:38.919 "timeout_sec": 30 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_nvme_set_options", 00:15:38.919 "params": { 00:15:38.919 "action_on_timeout": "none", 00:15:38.919 "timeout_us": 0, 00:15:38.919 "timeout_admin_us": 0, 00:15:38.919 "keep_alive_timeout_ms": 10000, 00:15:38.919 "arbitration_burst": 0, 00:15:38.919 "low_priority_weight": 0, 00:15:38.919 "medium_priority_weight": 0, 00:15:38.919 "high_priority_weight": 0, 00:15:38.919 "nvme_adminq_poll_period_us": 10000, 00:15:38.919 "nvme_ioq_poll_period_us": 0, 00:15:38.919 "io_queue_requests": 512, 00:15:38.919 "delay_cmd_submit": true, 00:15:38.919 "transport_retry_count": 4, 00:15:38.919 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.919 "bdev_retry_count": 3, 00:15:38.919 "transport_ack_timeout": 0, 00:15:38.919 "ctrlr_loss_timeout_sec": 0, 00:15:38.919 "reconnect_delay_sec": 0, 00:15:38.919 "fast_io_fail_timeout_sec": 0, 00:15:38.919 "disable_auto_failback": false, 00:15:38.919 "generate_uuids": false, 00:15:38.919 "transport_tos": 0, 00:15:38.919 "nvme_error_stat": false, 00:15:38.919 "rdma_srq_size": 0, 00:15:38.919 
"io_path_stat": false, 00:15:38.919 "allow_accel_sequence": false, 00:15:38.919 "rdma_max_cq_size": 0, 00:15:38.919 "rdma_cm_event_timeout_ms": 0, 00:15:38.919 "dhchap_digests": [ 00:15:38.919 "sha256", 00:15:38.919 "sha384", 00:15:38.919 "sha512" 00:15:38.919 ], 00:15:38.919 "dhchap_dhgroups": [ 00:15:38.919 "null", 00:15:38.919 "ffdhe2048", 00:15:38.919 "ffdhe3072", 00:15:38.919 "ffdhe4096", 00:15:38.919 "ffdhe6144", 00:15:38.919 "ffdhe8192" 00:15:38.919 ] 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_nvme_attach_controller", 00:15:38.919 "params": { 00:15:38.919 "name": "nvme0", 00:15:38.919 "trtype": "TCP", 00:15:38.919 "adrfam": "IPv4", 00:15:38.919 "traddr": "10.0.0.3", 00:15:38.919 "trsvcid": "4420", 00:15:38.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.919 "prchk_reftag": false, 00:15:38.919 "prchk_guard": false, 00:15:38.919 "ctrlr_loss_timeout_sec": 0, 00:15:38.919 "reconnect_delay_sec": 0, 00:15:38.919 "fast_io_fail_timeout_sec": 0, 00:15:38.919 "psk": "key0", 00:15:38.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.919 "hdgst": false, 00:15:38.919 "ddgst": false, 00:15:38.919 "multipath": "multipath" 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_nvme_set_hotplug", 00:15:38.919 "params": { 00:15:38.919 "period_us": 100000, 00:15:38.919 "enable": false 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_enable_histogram", 00:15:38.919 "params": { 00:15:38.919 "name": "nvme0n1", 00:15:38.919 "enable": true 00:15:38.919 } 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "method": "bdev_wait_for_examine" 00:15:38.919 } 00:15:38.919 ] 00:15:38.919 }, 00:15:38.919 { 00:15:38.919 "subsystem": "nbd", 00:15:38.919 "config": [] 00:15:38.919 } 00:15:38.919 ] 00:15:38.919 }' 00:15:38.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:38.919 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:38.919 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.919 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.919 [2024-11-22 14:53:53.445265] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:15:38.919 [2024-11-22 14:53:53.445647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72693 ] 00:15:39.178 [2024-11-22 14:53:53.601616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.178 [2024-11-22 14:53:53.674045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.178 [2024-11-22 14:53:53.830799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.437 [2024-11-22 14:53:53.888447] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.005 14:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.264 Running I/O for 1 seconds... 00:15:41.197 4258.00 IOPS, 16.63 MiB/s 00:15:41.198 Latency(us) 00:15:41.198 [2024-11-22T14:53:55.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.198 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:41.198 Verification LBA range: start 0x0 length 0x2000 00:15:41.198 nvme0n1 : 1.02 4314.71 16.85 0.00 0.00 29368.63 5689.72 24784.52 00:15:41.198 [2024-11-22T14:53:55.863Z] =================================================================================================================== 00:15:41.198 [2024-11-22T14:53:55.863Z] Total : 4314.71 16.85 0.00 0.00 29368.63 5689.72 24784.52 00:15:41.198 { 00:15:41.198 "results": [ 00:15:41.198 { 00:15:41.198 "job": "nvme0n1", 00:15:41.198 "core_mask": "0x2", 00:15:41.198 "workload": "verify", 00:15:41.198 "status": "finished", 00:15:41.198 "verify_range": { 00:15:41.198 "start": 0, 00:15:41.198 "length": 8192 00:15:41.198 }, 00:15:41.198 "queue_depth": 128, 00:15:41.198 "io_size": 4096, 00:15:41.198 "runtime": 1.016523, 00:15:41.198 "iops": 4314.708078420262, 00:15:41.198 "mibps": 16.854328431329147, 00:15:41.198 "io_failed": 0, 00:15:41.198 "io_timeout": 0, 00:15:41.198 "avg_latency_us": 29368.630727521453, 00:15:41.198 "min_latency_us": 5689.716363636364, 00:15:41.198 "max_latency_us": 24784.523636363636 00:15:41.198 } 00:15:41.198 ], 00:15:41.198 "core_count": 1 00:15:41.198 } 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
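The initiator side mirrors this: the bdevperf configuration shown above registers the PSK file as keyring entry "key0" and then attaches the controller with "psk": "key0", so the verify workload just measured (~4.3k IOPS at queue depth 128, 4 KiB I/O) ran over a TLS-protected NVMe/TCP connection. A condensed sketch of that flow, with paths relative to the SPDK repo root and the PSK path taken from the dump above:

PSK_FILE=/tmp/tmp.21R9er9XYg          # PSK in interchange format, mode 0600
cat > /tmp/bdevperf_tls.json <<EOF
{ "subsystems": [
    { "subsystem": "keyring",
      "config": [ { "method": "keyring_file_add_key",
                    "params": { "name": "key0", "path": "$PSK_FILE" } } ] },
    { "subsystem": "bdev",
      "config": [ { "method": "bdev_nvme_attach_controller",
                    "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                                "traddr": "10.0.0.3", "trsvcid": "4420",
                                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                                "hostnqn": "nqn.2016-06.io.spdk:host1",
                                "psk": "key0" } },
                  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/bdevperf_tls.json &
# Once the RPC socket is up (the test uses waitforlisten for this), confirm the
# controller attached and kick off the timed run:
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests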
00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:41.198 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:41.198 nvmf_trace.0 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72693 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72693 ']' 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72693 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72693 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:41.455 killing process with pid 72693 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72693' 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72693 00:15:41.455 Received shutdown signal, test time was about 1.000000 seconds 00:15:41.455 00:15:41.455 Latency(us) 00:15:41.455 [2024-11-22T14:53:56.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.455 [2024-11-22T14:53:56.120Z] =================================================================================================================== 00:15:41.455 [2024-11-22T14:53:56.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.455 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72693 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.714 rmmod nvme_tcp 00:15:41.714 rmmod nvme_fabrics 00:15:41.714 rmmod nvme_keyring 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72661 ']' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72661 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72661 ']' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72661 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72661 00:15:41.714 killing process with pid 72661 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72661' 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72661 00:15:41.714 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72661 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.972 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:42.230 14:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.mPftIuV1Fx /tmp/tmp.DLZOFJiPlx /tmp/tmp.21R9er9XYg 00:15:42.230 00:15:42.230 real 1m27.056s 00:15:42.230 user 2m20.162s 00:15:42.230 sys 0m28.438s 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.230 ************************************ 00:15:42.230 END TEST nvmf_tls 00:15:42.230 ************************************ 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.230 ************************************ 00:15:42.230 START TEST nvmf_fips 00:15:42.230 ************************************ 00:15:42.230 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:42.490 * Looking for test storage... 
00:15:42.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:42.490 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.490 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.490 14:53:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.490 --rc genhtml_branch_coverage=1 00:15:42.490 --rc genhtml_function_coverage=1 00:15:42.490 --rc genhtml_legend=1 00:15:42.490 --rc geninfo_all_blocks=1 00:15:42.490 --rc geninfo_unexecuted_blocks=1 00:15:42.490 00:15:42.490 ' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.490 --rc genhtml_branch_coverage=1 00:15:42.490 --rc genhtml_function_coverage=1 00:15:42.490 --rc genhtml_legend=1 00:15:42.490 --rc geninfo_all_blocks=1 00:15:42.490 --rc geninfo_unexecuted_blocks=1 00:15:42.490 00:15:42.490 ' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.490 --rc genhtml_branch_coverage=1 00:15:42.490 --rc genhtml_function_coverage=1 00:15:42.490 --rc genhtml_legend=1 00:15:42.490 --rc geninfo_all_blocks=1 00:15:42.490 --rc geninfo_unexecuted_blocks=1 00:15:42.490 00:15:42.490 ' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.490 --rc genhtml_branch_coverage=1 00:15:42.490 --rc genhtml_function_coverage=1 00:15:42.490 --rc genhtml_legend=1 00:15:42.490 --rc geninfo_all_blocks=1 00:15:42.490 --rc geninfo_unexecuted_blocks=1 00:15:42.490 00:15:42.490 ' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
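The scripts/common.sh trace above is just a component-wise comparison of dotted version strings (used here for the lcov version, and again below to require OpenSSL >= 3.0.0 before the FIPS test proceeds). A simplified re-creation of the pattern, assuming purely numeric components; the real helper additionally validates each field with the ^[0-9]+$ check visible in the trace:

ver_ge() {   # succeeds if version $1 >= version $2
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
    done
    return 0   # all components equal
}
ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is new enough for FIPS testing"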
00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.490 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:42.491 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.750 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:42.751 Error setting digest 00:15:42.751 4082A13EAD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:42.751 4082A13EAD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.751 
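The sequence just traced is the actual FIPS gate of this test: the provider list must contain both the base and the FIPS provider, and the MD5 invocation is expected to fail (the "unsupported ... initialization error" lines above) because MD5 is not an approved algorithm. The same check can be reproduced by hand; spdk_fips.conf is the provider configuration the test just generated via build_openssl_config, and the exact provider names and error text vary by distribution:

export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name      # expect the base provider and a fips provider
if echo -n test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still works - the FIPS provider is NOT enforcing the approved set" >&2
    exit 1
else
    echo "MD5 rejected as expected - FIPS mode is active"
fi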
14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:42.751 Cannot find device "nvmf_init_br" 00:15:42.751 14:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:42.751 Cannot find device "nvmf_init_br2" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:42.751 Cannot find device "nvmf_tgt_br" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.751 Cannot find device "nvmf_tgt_br2" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:42.751 Cannot find device "nvmf_init_br" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:42.751 Cannot find device "nvmf_init_br2" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:42.751 Cannot find device "nvmf_tgt_br" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.751 Cannot find device "nvmf_tgt_br2" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.751 Cannot find device "nvmf_br" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.751 Cannot find device "nvmf_init_if" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.751 Cannot find device "nvmf_init_if2" 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.751 14:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.751 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.009 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:43.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:15:43.010 00:15:43.010 --- 10.0.0.3 ping statistics --- 00:15:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.010 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:43.010 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:43.010 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:15:43.010 00:15:43.010 --- 10.0.0.4 ping statistics --- 00:15:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.010 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:43.010 00:15:43.010 --- 10.0.0.1 ping statistics --- 00:15:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.010 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:43.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:43.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:43.010 00:15:43.010 --- 10.0.0.2 ping statistics --- 00:15:43.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.010 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73023 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73023 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73023 ']' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.010 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:43.268 [2024-11-22 14:53:57.722515] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
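For reference, the network that nvmf_veth_init stands up in the trace above reduces to a small fixed topology: an initiator veth pair per address (10.0.0.1 and 10.0.0.2 in the default namespace), a target veth pair per address (10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk), all joined through the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420. A condensed sketch of the first initiator/target pair, using the same interface names and addresses that appear in the trace (the *_if2 interfaces are handled identically):

  ip netns add nvmf_tgt_ns_spdk                                  # target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # only the target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address (default namespace)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address (inside the namespace)
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                                # bridge joining the two bridge-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side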
00:15:43.268 [2024-11-22 14:53:57.722606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.268 [2024-11-22 14:53:57.876523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.527 [2024-11-22 14:53:57.946874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.527 [2024-11-22 14:53:57.946942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.527 [2024-11-22 14:53:57.946957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.527 [2024-11-22 14:53:57.946967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.527 [2024-11-22 14:53:57.946976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.527 [2024-11-22 14:53:57.947526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.527 [2024-11-22 14:53:58.024777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.095 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.095 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:44.095 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.095 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.095 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.74M 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.74M 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.74M 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.74M 00:15:44.354 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.614 [2024-11-22 14:53:59.055241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.614 [2024-11-22 14:53:59.071184] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:44.614 [2024-11-22 14:53:59.071435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.614 malloc0 00:15:44.614 14:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73059 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73059 /var/tmp/bdevperf.sock 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73059 ']' 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.614 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:44.614 [2024-11-22 14:53:59.209559] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:44.614 [2024-11-22 14:53:59.209645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73059 ] 00:15:44.873 [2024-11-22 14:53:59.347249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.873 [2024-11-22 14:53:59.410636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.873 [2024-11-22 14:53:59.484795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.809 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.809 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:45.809 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.74M 00:15:45.809 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:46.067 [2024-11-22 14:54:00.597582] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:46.067 TLSTESTn1 00:15:46.067 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:46.324 Running I/O for 10 seconds... 
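In short, the data-path half of the FIPS test above boils down to three RPC-driven steps against the bdevperf application (the same PSK file was installed on the target side earlier via setup_nvmf_tgt_conf). A condensed sketch using the socket, key name and run-specific PSK path that appear in the trace, with the repository paths abbreviated:

  # 1. TLS pre-shared key in the NVMe-oF interchange format, readable only by the owner
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.74M
  chmod 0600 /tmp/spdk-psk.74M
  # 2. Register the key with bdevperf and attach the TLS listener at 10.0.0.3:4420
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.74M
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # 3. Run the queued verify workload (10 s, queue depth 128, 4 KiB I/O) against the resulting TLSTESTn1 bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests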
00:15:48.185 4648.00 IOPS, 18.16 MiB/s [2024-11-22T14:54:04.228Z] 4785.50 IOPS, 18.69 MiB/s [2024-11-22T14:54:05.165Z] 4588.67 IOPS, 17.92 MiB/s [2024-11-22T14:54:06.101Z] 4550.00 IOPS, 17.77 MiB/s [2024-11-22T14:54:07.037Z] 4480.60 IOPS, 17.50 MiB/s [2024-11-22T14:54:07.996Z] 4435.67 IOPS, 17.33 MiB/s [2024-11-22T14:54:08.932Z] 4483.14 IOPS, 17.51 MiB/s [2024-11-22T14:54:09.870Z] 4547.00 IOPS, 17.76 MiB/s [2024-11-22T14:54:11.248Z] 4583.67 IOPS, 17.90 MiB/s [2024-11-22T14:54:11.248Z] 4616.10 IOPS, 18.03 MiB/s 00:15:56.583 Latency(us) 00:15:56.583 [2024-11-22T14:54:11.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.583 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:56.583 Verification LBA range: start 0x0 length 0x2000 00:15:56.583 TLSTESTn1 : 10.03 4616.71 18.03 0.00 0.00 27660.88 7745.16 28240.06 00:15:56.583 [2024-11-22T14:54:11.248Z] =================================================================================================================== 00:15:56.583 [2024-11-22T14:54:11.248Z] Total : 4616.71 18.03 0.00 0.00 27660.88 7745.16 28240.06 00:15:56.583 { 00:15:56.583 "results": [ 00:15:56.583 { 00:15:56.583 "job": "TLSTESTn1", 00:15:56.583 "core_mask": "0x4", 00:15:56.583 "workload": "verify", 00:15:56.583 "status": "finished", 00:15:56.583 "verify_range": { 00:15:56.583 "start": 0, 00:15:56.583 "length": 8192 00:15:56.583 }, 00:15:56.583 "queue_depth": 128, 00:15:56.583 "io_size": 4096, 00:15:56.583 "runtime": 10.026412, 00:15:56.583 "iops": 4616.706355174712, 00:15:56.583 "mibps": 18.03400919990122, 00:15:56.583 "io_failed": 0, 00:15:56.583 "io_timeout": 0, 00:15:56.583 "avg_latency_us": 27660.877914289475, 00:15:56.583 "min_latency_us": 7745.163636363636, 00:15:56.583 "max_latency_us": 28240.05818181818 00:15:56.583 } 00:15:56.583 ], 00:15:56.583 "core_count": 1 00:15:56.583 } 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:56.583 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:56.583 nvmf_trace.0 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73059 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73059 ']' 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73059 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.584 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73059 00:15:56.584 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:56.584 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:56.584 killing process with pid 73059 00:15:56.584 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73059' 00:15:56.584 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73059 00:15:56.584 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.584 00:15:56.584 Latency(us) 00:15:56.584 [2024-11-22T14:54:11.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.584 [2024-11-22T14:54:11.249Z] =================================================================================================================== 00:15:56.584 [2024-11-22T14:54:11.249Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.584 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73059 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.842 rmmod nvme_tcp 00:15:56.842 rmmod nvme_fabrics 00:15:56.842 rmmod nvme_keyring 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73023 ']' 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73023 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73023 ']' 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73023 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73023 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73023' 00:15:56.842 killing process with pid 73023 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73023 00:15:56.842 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73023 00:15:57.100 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.101 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:57.359 14:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.74M 00:15:57.359 00:15:57.359 real 0m15.040s 00:15:57.359 user 0m20.458s 00:15:57.359 sys 0m6.132s 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:57.359 ************************************ 00:15:57.359 END TEST nvmf_fips 00:15:57.359 ************************************ 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.359 ************************************ 00:15:57.359 START TEST nvmf_control_msg_list 00:15:57.359 ************************************ 00:15:57.359 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:57.618 * Looking for test storage... 00:15:57.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:57.618 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.619 --rc genhtml_branch_coverage=1 00:15:57.619 --rc genhtml_function_coverage=1 00:15:57.619 --rc genhtml_legend=1 00:15:57.619 --rc geninfo_all_blocks=1 00:15:57.619 --rc geninfo_unexecuted_blocks=1 00:15:57.619 00:15:57.619 ' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.619 --rc genhtml_branch_coverage=1 00:15:57.619 --rc genhtml_function_coverage=1 00:15:57.619 --rc genhtml_legend=1 00:15:57.619 --rc geninfo_all_blocks=1 00:15:57.619 --rc geninfo_unexecuted_blocks=1 00:15:57.619 00:15:57.619 ' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.619 --rc genhtml_branch_coverage=1 00:15:57.619 --rc genhtml_function_coverage=1 00:15:57.619 --rc genhtml_legend=1 00:15:57.619 --rc geninfo_all_blocks=1 00:15:57.619 --rc geninfo_unexecuted_blocks=1 00:15:57.619 00:15:57.619 ' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.619 --rc genhtml_branch_coverage=1 00:15:57.619 --rc genhtml_function_coverage=1 00:15:57.619 --rc genhtml_legend=1 00:15:57.619 --rc geninfo_all_blocks=1 00:15:57.619 --rc 
geninfo_unexecuted_blocks=1 00:15:57.619 00:15:57.619 ' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.619 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.619 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.620 Cannot find device "nvmf_init_br" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.620 Cannot find device "nvmf_init_br2" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.620 Cannot find device "nvmf_tgt_br" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.620 Cannot find device "nvmf_tgt_br2" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.620 Cannot find device "nvmf_init_br" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.620 Cannot find device "nvmf_init_br2" 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:57.620 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.878 Cannot find device "nvmf_tgt_br" 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.878 Cannot find device "nvmf_tgt_br2" 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.878 Cannot find device "nvmf_br" 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.878 Cannot find 
device "nvmf_init_if" 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:57.878 Cannot find device "nvmf_init_if2" 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.878 14:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.878 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:58.137 00:15:58.137 --- 10.0.0.3 ping statistics --- 00:15:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.137 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.137 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.137 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:58.137 00:15:58.137 --- 10.0.0.4 ping statistics --- 00:15:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.137 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:15:58.137 00:15:58.137 --- 10.0.0.1 ping statistics --- 00:15:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.137 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:58.137 00:15:58.137 --- 10.0.0.2 ping statistics --- 00:15:58.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.137 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.137 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73458 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73458 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73458 ']' 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
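One detail worth noting in the firewall setup repeated above: the ipts wrapper tags every rule it installs with an SPDK_NVMF comment, which is what lets the teardown path (the iptr call seen at the end of the fips test) strip exactly the test's rules and nothing else. The pattern, as traced:

  # install: each rule carries a marker comment naming the rule itself
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # cleanup: rewrite the ruleset without any SPDK_NVMF-tagged lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore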
00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.138 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:58.138 [2024-11-22 14:54:12.665797] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:15:58.138 [2024-11-22 14:54:12.665894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.396 [2024-11-22 14:54:12.820359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.396 [2024-11-22 14:54:12.873576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.396 [2024-11-22 14:54:12.873645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.396 [2024-11-22 14:54:12.873659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.396 [2024-11-22 14:54:12.873670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.396 [2024-11-22 14:54:12.873679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.396 [2024-11-22 14:54:12.874127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.396 [2024-11-22 14:54:12.933648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.963 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.963 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:58.963 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.963 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.963 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 [2024-11-22 14:54:13.665022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 Malloc0 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:59.222 [2024-11-22 14:54:13.703724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73490 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73491 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73492 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.222 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73490 00:15:59.222 [2024-11-22 14:54:13.882094] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:59.481 [2024-11-22 14:54:13.902161] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:59.481 [2024-11-22 14:54:13.902358] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:00.417 Initializing NVMe Controllers 00:16:00.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:00.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:00.417 Initialization complete. Launching workers. 00:16:00.417 ======================================================== 00:16:00.417 Latency(us) 00:16:00.417 Device Information : IOPS MiB/s Average min max 00:16:00.417 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3510.89 13.71 284.51 122.20 2161.41 00:16:00.417 ======================================================== 00:16:00.417 Total : 3510.89 13.71 284.51 122.20 2161.41 00:16:00.417 00:16:00.417 Initializing NVMe Controllers 00:16:00.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:00.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:00.417 Initialization complete. Launching workers. 00:16:00.417 ======================================================== 00:16:00.417 Latency(us) 00:16:00.417 Device Information : IOPS MiB/s Average min max 00:16:00.417 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3518.00 13.74 283.93 156.85 465.56 00:16:00.417 ======================================================== 00:16:00.417 Total : 3518.00 13.74 283.93 156.85 465.56 00:16:00.417 00:16:00.417 Initializing NVMe Controllers 00:16:00.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:00.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:00.417 Initialization complete. Launching workers. 
00:16:00.417 ======================================================== 00:16:00.417 Latency(us) 00:16:00.417 Device Information : IOPS MiB/s Average min max 00:16:00.417 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3501.98 13.68 285.24 163.47 2324.53 00:16:00.417 ======================================================== 00:16:00.417 Total : 3501.98 13.68 285.24 163.47 2324.53 00:16:00.417 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73491 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73492 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.417 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.417 rmmod nvme_tcp 00:16:00.417 rmmod nvme_fabrics 00:16:00.417 rmmod nvme_keyring 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73458 ']' 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73458 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73458 ']' 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73458 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.417 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73458 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.676 killing process with pid 73458 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73458' 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73458 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73458 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:00.676 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:00.936 00:16:00.936 real 0m3.561s 00:16:00.936 user 0m5.495s 00:16:00.936 
sys 0m1.461s 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:00.936 ************************************ 00:16:00.936 END TEST nvmf_control_msg_list 00:16:00.936 ************************************ 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.936 ************************************ 00:16:00.936 START TEST nvmf_wait_for_buf 00:16:00.936 ************************************ 00:16:00.936 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:01.196 * Looking for test storage... 00:16:01.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.196 --rc genhtml_branch_coverage=1 00:16:01.196 --rc genhtml_function_coverage=1 00:16:01.196 --rc genhtml_legend=1 00:16:01.196 --rc geninfo_all_blocks=1 00:16:01.196 --rc geninfo_unexecuted_blocks=1 00:16:01.196 00:16:01.196 ' 00:16:01.196 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:01.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.196 --rc genhtml_branch_coverage=1 00:16:01.196 --rc genhtml_function_coverage=1 00:16:01.196 --rc genhtml_legend=1 00:16:01.196 --rc geninfo_all_blocks=1 00:16:01.197 --rc geninfo_unexecuted_blocks=1 00:16:01.197 00:16:01.197 ' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:01.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.197 --rc genhtml_branch_coverage=1 00:16:01.197 --rc genhtml_function_coverage=1 00:16:01.197 --rc genhtml_legend=1 00:16:01.197 --rc geninfo_all_blocks=1 00:16:01.197 --rc geninfo_unexecuted_blocks=1 00:16:01.197 00:16:01.197 ' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:01.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.197 --rc genhtml_branch_coverage=1 00:16:01.197 --rc genhtml_function_coverage=1 00:16:01.197 --rc genhtml_legend=1 00:16:01.197 --rc geninfo_all_blocks=1 00:16:01.197 --rc geninfo_unexecuted_blocks=1 00:16:01.197 00:16:01.197 ' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.197 14:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
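At this point nvmftestinit hands off to prepare_net_devs; since NET_TYPE is virt (the [[ virt == phy ]] checks below fall through), the test builds its network out of veth pairs instead of touching physical NICs. Condensed from the ip/iptables trace that follows: two initiator interfaces in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and two target interfaces moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are joined by the nvmf_br bridge, and iptables rules admit NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same topology, reduced to one initiator/target pair (names, addresses and port are copied from the trace; a root shell with iproute2 and iptables is assumed):

# one veth pair per side; the target end lives in its own namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator address in the root namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends together and open TCP port 4420 for NVMe/TCP
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks mirroring the trace: each side can reach the other
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1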
00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:01.197 Cannot find device "nvmf_init_br" 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:01.197 Cannot find device "nvmf_init_br2" 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:01.197 Cannot find device "nvmf_tgt_br" 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:01.197 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.456 Cannot find device "nvmf_tgt_br2" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:01.456 Cannot find device "nvmf_init_br" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:01.456 Cannot find device "nvmf_init_br2" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:01.456 Cannot find device "nvmf_tgt_br" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:01.456 Cannot find device "nvmf_tgt_br2" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:01.456 Cannot find device "nvmf_br" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:01.456 Cannot find device "nvmf_init_if" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:01.456 Cannot find device "nvmf_init_if2" 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.456 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.456 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.456 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.457 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.457 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.457 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.457 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.457 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:01.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:01.716 00:16:01.716 --- 10.0.0.3 ping statistics --- 00:16:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.716 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:01.716 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:01.716 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:16:01.716 00:16:01.716 --- 10.0.0.4 ping statistics --- 00:16:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.716 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:16:01.716 00:16:01.716 --- 10.0.0.1 ping statistics --- 00:16:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.716 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:01.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:01.716 00:16:01.716 --- 10.0.0.2 ping statistics --- 00:16:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.716 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73727 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73727 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73727 ']' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.716 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.716 [2024-11-22 14:54:16.308916] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:16:01.716 [2024-11-22 14:54:16.308997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.975 [2024-11-22 14:54:16.454159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.975 [2024-11-22 14:54:16.513400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.975 [2024-11-22 14:54:16.513739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.975 [2024-11-22 14:54:16.513884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.975 [2024-11-22 14:54:16.513910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.975 [2024-11-22 14:54:16.513923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.975 [2024-11-22 14:54:16.514386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:01.975 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.975 14:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 [2024-11-22 14:54:16.684406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 Malloc0 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 [2024-11-22 14:54:16.768069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 [2024-11-22 14:54:16.796157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:02.493 [2024-11-22 14:54:16.991522] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:03.870 Initializing NVMe Controllers 00:16:03.870 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:03.870 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:03.870 Initialization complete. Launching workers. 00:16:03.870 ======================================================== 00:16:03.870 Latency(us) 00:16:03.870 Device Information : IOPS MiB/s Average min max 00:16:03.870 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 527.98 66.00 7576.21 2063.45 9065.55 00:16:03.870 ======================================================== 00:16:03.870 Total : 527.98 66.00 7576.21 2063.45 9065.55 00:16:03.870 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=5016 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 5016 -eq 0 ]] 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:03.870 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.871 rmmod nvme_tcp 00:16:03.871 rmmod nvme_fabrics 00:16:03.871 rmmod nvme_keyring 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73727 ']' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73727 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73727 ']' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73727 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73727 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.871 killing process with pid 73727 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73727' 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73727 00:16:03.871 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73727 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:04.130 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.389 14:54:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.389 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:04.389 00:16:04.389 real 0m3.431s 00:16:04.389 user 0m2.698s 00:16:04.389 sys 0m0.867s 00:16:04.389 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.389 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:04.389 ************************************ 00:16:04.389 END TEST nvmf_wait_for_buf 00:16:04.389 ************************************ 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.650 ************************************ 00:16:04.650 START TEST nvmf_nsid 00:16:04.650 ************************************ 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:04.650 * Looking for test storage... 
00:16:04.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.650 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.651 --rc genhtml_branch_coverage=1 00:16:04.651 --rc genhtml_function_coverage=1 00:16:04.651 --rc genhtml_legend=1 00:16:04.651 --rc geninfo_all_blocks=1 00:16:04.651 --rc geninfo_unexecuted_blocks=1 00:16:04.651 00:16:04.651 ' 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.651 --rc genhtml_branch_coverage=1 00:16:04.651 --rc genhtml_function_coverage=1 00:16:04.651 --rc genhtml_legend=1 00:16:04.651 --rc geninfo_all_blocks=1 00:16:04.651 --rc geninfo_unexecuted_blocks=1 00:16:04.651 00:16:04.651 ' 00:16:04.651 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:04.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.652 --rc genhtml_branch_coverage=1 00:16:04.652 --rc genhtml_function_coverage=1 00:16:04.652 --rc genhtml_legend=1 00:16:04.652 --rc geninfo_all_blocks=1 00:16:04.652 --rc geninfo_unexecuted_blocks=1 00:16:04.652 00:16:04.652 ' 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:04.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.652 --rc genhtml_branch_coverage=1 00:16:04.652 --rc genhtml_function_coverage=1 00:16:04.652 --rc genhtml_legend=1 00:16:04.652 --rc geninfo_all_blocks=1 00:16:04.652 --rc geninfo_unexecuted_blocks=1 00:16:04.652 00:16:04.652 ' 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
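The stretch of trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them component by component. A minimal standalone sketch of that comparison, reconstructed from the trace rather than copied from the real helpers (names and the digit handling are simplified):

# Illustrative re-creation of the lt/cmp_versions logic traced above (not the SPDK source).
lt() {                                   # succeeds when version $1 < version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && return 1          # first differing component decides
        ((d1 < d2)) && return 0
    done
    return 1                             # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov: enable the branch/function coverage options"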
00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.652 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.653 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.654 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:04.654 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:04.655 Cannot find device "nvmf_init_br" 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:04.655 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:04.915 Cannot find device "nvmf_init_br2" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:04.915 Cannot find device "nvmf_tgt_br" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.915 Cannot find device "nvmf_tgt_br2" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:04.915 Cannot find device "nvmf_init_br" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:04.915 Cannot find device "nvmf_init_br2" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:04.915 Cannot find device "nvmf_tgt_br" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:04.915 Cannot find device "nvmf_tgt_br2" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:04.915 Cannot find device "nvmf_br" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:04.915 Cannot find device "nvmf_init_if" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:04.915 Cannot find device "nvmf_init_if2" 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:04.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:04.915 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
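Every "Cannot find device" line above is just the teardown-before-setup pass finding nothing to delete; the commands that follow build the actual test topology: a dedicated network namespace for the target, two veth pairs per side, and a bridge joining the host-facing peer ends. Condensed into one place (device names and addresses copied from the trace; this is an outline of the nvmf_veth_init steps, not a substitute for nvmf/common.sh):

# Target interfaces live in their own namespace; initiator interfaces stay in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the ns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties the peer ends together
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

The iptables ACCEPT rules and the four pings that follow in the log simply prove that 10.0.0.1/2 (root namespace) and 10.0.0.3/4 (target namespace) can reach each other across the bridge before any NVMe/TCP traffic is attempted.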
00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:05.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:05.174 00:16:05.174 --- 10.0.0.3 ping statistics --- 00:16:05.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.174 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:05.174 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:05.174 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:05.174 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:05.174 00:16:05.174 --- 10.0.0.4 ping statistics --- 00:16:05.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.175 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:05.175 00:16:05.175 --- 10.0.0.1 ping statistics --- 00:16:05.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.175 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:05.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:05.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:05.175 00:16:05.175 --- 10.0.0.2 ping statistics --- 00:16:05.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.175 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73985 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73985 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73985 ']' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.175 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:05.175 [2024-11-22 14:54:19.804006] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
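nvmfappstart then prepends the namespace wrapper to the target command line, so the NVMe/TCP listeners it creates sit behind the veth pair rather than on the host's real interfaces, and waits for the app's RPC socket before any rpc.py call is made. A simplified sketch of that launch-and-wait step (the polling loop is only a stand-in for the real waitforlisten helper, assumed equivalent here for illustration):

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!
# Stand-in for waitforlisten: poll the RPC socket until the target answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done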
00:16:05.175 [2024-11-22 14:54:19.804092] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.434 [2024-11-22 14:54:19.958210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.434 [2024-11-22 14:54:20.028100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.434 [2024-11-22 14:54:20.028178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.434 [2024-11-22 14:54:20.028194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.434 [2024-11-22 14:54:20.028205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.434 [2024-11-22 14:54:20.028214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.434 [2024-11-22 14:54:20.028786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.693 [2024-11-22 14:54:20.108705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74014 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:05.693 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5c3da6bb-4d4b-49ff-9488-f90e4613bb46 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=df5ba61d-4c5a-4e15-bdce-4730de60dafb 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ca4f5bcb-2f78-43dc-b0b3-66c7b432667b 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.694 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:05.694 null0 00:16:05.694 null1 00:16:05.694 null2 00:16:05.694 [2024-11-22 14:54:20.305093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.694 [2024-11-22 14:54:20.319031] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:05.694 [2024-11-22 14:54:20.319119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74014 ] 00:16:05.694 [2024-11-22 14:54:20.329240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:05.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74014 /var/tmp/tgt2.sock 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74014 ']' 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
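The nsid test itself runs against a second SPDK target (RPC socket /var/tmp/tgt2.sock, listening on 10.0.0.1:4421) that exposes three namespaces to the host through nqn.2024-10.io.spdk:cnode2, each tagged with one of the UUIDs generated above. The check that follows in the log is: connect, read each namespace's NGUID, and confirm it equals the corresponding UUID with the dashes stripped. A hedged sketch of that comparison for the first namespace (device naming assumes the controller enumerates as nvme0, as it does in this run):

# ns1uuid holds the UUID the target namespace was created with (from the uuidgen step above).
expected=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')   # uuid2nguid: drop dashes, uppercase

nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
[[ $reported == "$expected" ]] && echo "nvme0n1 NGUID matches its namespace UUID"

nvme disconnect -d /dev/nvme0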
00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.953 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:05.953 [2024-11-22 14:54:20.475739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.953 [2024-11-22 14:54:20.544098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.213 [2024-11-22 14:54:20.646991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.472 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.472 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:06.472 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:06.730 [2024-11-22 14:54:21.351726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.730 [2024-11-22 14:54:21.367828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:06.989 nvme0n1 nvme0n2 00:16:06.989 nvme1n1 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:06.989 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:07.971 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:07.971 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:07.971 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:07.971 14:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:07.971 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5c3da6bb-4d4b-49ff-9488-f90e4613bb46 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5c3da6bb4d4b49ff9488f90e4613bb46 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5C3DA6BB4D4B49FF9488F90E4613BB46 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5C3DA6BB4D4B49FF9488F90E4613BB46 == \5\C\3\D\A\6\B\B\4\D\4\B\4\9\F\F\9\4\8\8\F\9\0\E\4\6\1\3\B\B\4\6 ]] 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid df5ba61d-4c5a-4e15-bdce-4730de60dafb 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=df5ba61d4c5a4e15bdce4730de60dafb 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DF5BA61D4C5A4E15BDCE4730DE60DAFB 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DF5BA61D4C5A4E15BDCE4730DE60DAFB == \D\F\5\B\A\6\1\D\4\C\5\A\4\E\1\5\B\D\C\E\4\7\3\0\D\E\6\0\D\A\F\B ]] 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:08.253 14:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ca4f5bcb-2f78-43dc-b0b3-66c7b432667b 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ca4f5bcb2f7843dcb0b366c7b432667b 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CA4F5BCB2F7843DCB0B366C7B432667B 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ CA4F5BCB2F7843DCB0B366C7B432667B == \C\A\4\F\5\B\C\B\2\F\7\8\4\3\D\C\B\0\B\3\6\6\C\7\B\4\3\2\6\6\7\B ]] 00:16:08.253 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74014 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74014 ']' 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74014 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74014 00:16:08.513 killing process with pid 74014 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74014' 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74014 00:16:08.513 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74014 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:09.081 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:09.081 rmmod nvme_tcp 00:16:09.340 rmmod nvme_fabrics 00:16:09.340 rmmod nvme_keyring 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73985 ']' 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73985 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73985 ']' 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73985 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73985 00:16:09.340 killing process with pid 73985 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73985' 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73985 00:16:09.340 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73985 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:09.599 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:09.858 00:16:09.858 real 0m5.242s 00:16:09.858 user 0m7.631s 00:16:09.858 sys 0m1.959s 00:16:09.858 ************************************ 00:16:09.858 END TEST nvmf_nsid 00:16:09.858 ************************************ 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:09.858 ************************************ 00:16:09.858 END TEST nvmf_target_extra 00:16:09.858 ************************************ 00:16:09.858 00:16:09.858 real 4m58.948s 00:16:09.858 user 10m18.305s 00:16:09.858 sys 1m10.192s 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.858 14:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.858 14:54:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:09.858 14:54:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.858 14:54:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.858 14:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.858 ************************************ 00:16:09.858 START TEST nvmf_host 00:16:09.858 ************************************ 00:16:09.858 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:09.858 * Looking for test storage... 
00:16:09.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:09.858 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:09.858 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.858 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.118 --rc genhtml_branch_coverage=1 00:16:10.118 --rc genhtml_function_coverage=1 00:16:10.118 --rc genhtml_legend=1 00:16:10.118 --rc geninfo_all_blocks=1 00:16:10.118 --rc geninfo_unexecuted_blocks=1 00:16:10.118 00:16:10.118 ' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:10.118 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:10.118 --rc genhtml_branch_coverage=1 00:16:10.118 --rc genhtml_function_coverage=1 00:16:10.118 --rc genhtml_legend=1 00:16:10.118 --rc geninfo_all_blocks=1 00:16:10.118 --rc geninfo_unexecuted_blocks=1 00:16:10.118 00:16:10.118 ' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.118 --rc genhtml_branch_coverage=1 00:16:10.118 --rc genhtml_function_coverage=1 00:16:10.118 --rc genhtml_legend=1 00:16:10.118 --rc geninfo_all_blocks=1 00:16:10.118 --rc geninfo_unexecuted_blocks=1 00:16:10.118 00:16:10.118 ' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.118 --rc genhtml_branch_coverage=1 00:16:10.118 --rc genhtml_function_coverage=1 00:16:10.118 --rc genhtml_legend=1 00:16:10.118 --rc geninfo_all_blocks=1 00:16:10.118 --rc geninfo_unexecuted_blocks=1 00:16:10.118 00:16:10.118 ' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.118 14:54:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:10.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:10.119 
14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.119 ************************************ 00:16:10.119 START TEST nvmf_identify 00:16:10.119 ************************************ 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:10.119 * Looking for test storage... 00:16:10.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:10.119 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.379 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:10.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.380 --rc genhtml_branch_coverage=1 00:16:10.380 --rc genhtml_function_coverage=1 00:16:10.380 --rc genhtml_legend=1 00:16:10.380 --rc geninfo_all_blocks=1 00:16:10.380 --rc geninfo_unexecuted_blocks=1 00:16:10.380 00:16:10.380 ' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:10.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.380 --rc genhtml_branch_coverage=1 00:16:10.380 --rc genhtml_function_coverage=1 00:16:10.380 --rc genhtml_legend=1 00:16:10.380 --rc geninfo_all_blocks=1 00:16:10.380 --rc geninfo_unexecuted_blocks=1 00:16:10.380 00:16:10.380 ' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:10.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.380 --rc genhtml_branch_coverage=1 00:16:10.380 --rc genhtml_function_coverage=1 00:16:10.380 --rc genhtml_legend=1 00:16:10.380 --rc geninfo_all_blocks=1 00:16:10.380 --rc geninfo_unexecuted_blocks=1 00:16:10.380 00:16:10.380 ' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:10.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.380 --rc genhtml_branch_coverage=1 00:16:10.380 --rc genhtml_function_coverage=1 00:16:10.380 --rc genhtml_legend=1 00:16:10.380 --rc geninfo_all_blocks=1 00:16:10.380 --rc geninfo_unexecuted_blocks=1 00:16:10.380 00:16:10.380 ' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.380 
14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:10.380 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.380 14:54:24 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:10.380 Cannot find device "nvmf_init_br" 00:16:10.380 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:10.381 Cannot find device "nvmf_init_br2" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:10.381 Cannot find device "nvmf_tgt_br" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:10.381 Cannot find device "nvmf_tgt_br2" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:10.381 Cannot find device "nvmf_init_br" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:10.381 Cannot find device "nvmf_init_br2" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:10.381 Cannot find device "nvmf_tgt_br" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:10.381 Cannot find device "nvmf_tgt_br2" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:10.381 Cannot find device "nvmf_br" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:10.381 Cannot find device "nvmf_init_if" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:10.381 Cannot find device "nvmf_init_if2" 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:10.381 14:54:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.381 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.381 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:10.381 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.381 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.640 
14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:10.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:10.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:10.640 00:16:10.640 --- 10.0.0.3 ping statistics --- 00:16:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.640 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:10.640 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:10.640 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:16:10.640 00:16:10.640 --- 10.0.0.4 ping statistics --- 00:16:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.640 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:10.640 00:16:10.640 --- 10.0.0.1 ping statistics --- 00:16:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.640 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:10.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:10.640 00:16:10.640 --- 10.0.0.2 ping statistics --- 00:16:10.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.640 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74375 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74375 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74375 ']' 00:16:10.640 
14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.640 14:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:10.900 [2024-11-22 14:54:25.356515] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:10.900 [2024-11-22 14:54:25.356606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.900 [2024-11-22 14:54:25.512181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.158 [2024-11-22 14:54:25.578069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.158 [2024-11-22 14:54:25.578429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.158 [2024-11-22 14:54:25.578456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.158 [2024-11-22 14:54:25.578467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.159 [2024-11-22 14:54:25.578477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:11.159 [2024-11-22 14:54:25.580021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.159 [2024-11-22 14:54:25.580162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.159 [2024-11-22 14:54:25.580331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.159 [2024-11-22 14:54:25.580334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.159 [2024-11-22 14:54:25.657757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 [2024-11-22 14:54:26.336520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.727 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 Malloc0 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 [2024-11-22 14:54:26.452265] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:11.987 [ 00:16:11.987 { 00:16:11.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:11.987 "subtype": "Discovery", 00:16:11.987 "listen_addresses": [ 00:16:11.987 { 00:16:11.987 "trtype": "TCP", 00:16:11.987 "adrfam": "IPv4", 00:16:11.987 "traddr": "10.0.0.3", 00:16:11.987 "trsvcid": "4420" 00:16:11.987 } 00:16:11.987 ], 00:16:11.987 "allow_any_host": true, 00:16:11.987 "hosts": [] 00:16:11.987 }, 00:16:11.987 { 00:16:11.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.987 "subtype": "NVMe", 00:16:11.987 "listen_addresses": [ 00:16:11.987 { 00:16:11.987 "trtype": "TCP", 00:16:11.987 "adrfam": "IPv4", 00:16:11.987 "traddr": "10.0.0.3", 00:16:11.987 "trsvcid": "4420" 00:16:11.987 } 00:16:11.987 ], 00:16:11.987 "allow_any_host": true, 00:16:11.987 "hosts": [], 00:16:11.987 "serial_number": "SPDK00000000000001", 00:16:11.987 "model_number": "SPDK bdev Controller", 00:16:11.987 "max_namespaces": 32, 00:16:11.987 "min_cntlid": 1, 00:16:11.987 "max_cntlid": 65519, 00:16:11.987 "namespaces": [ 00:16:11.987 { 00:16:11.987 "nsid": 1, 00:16:11.987 "bdev_name": "Malloc0", 00:16:11.987 "name": "Malloc0", 00:16:11.987 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:11.987 "eui64": "ABCDEF0123456789", 00:16:11.987 "uuid": "38754a70-6ce6-4212-a24b-65f87dc07f77" 00:16:11.987 } 00:16:11.987 ] 00:16:11.987 } 00:16:11.987 ] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.987 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:11.987 [2024-11-22 14:54:26.506346] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:16:11.987 [2024-11-22 14:54:26.506421] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74410 ] 00:16:12.249 [2024-11-22 14:54:26.662408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:12.249 [2024-11-22 14:54:26.662495] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:12.249 [2024-11-22 14:54:26.662503] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:12.249 [2024-11-22 14:54:26.662521] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:12.249 [2024-11-22 14:54:26.662534] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:12.249 [2024-11-22 14:54:26.662908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:12.249 [2024-11-22 14:54:26.662988] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x68f750 0 00:16:12.249 [2024-11-22 14:54:26.668416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:12.249 [2024-11-22 14:54:26.668442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:12.249 [2024-11-22 14:54:26.668465] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:12.249 [2024-11-22 14:54:26.668469] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:12.249 [2024-11-22 14:54:26.668504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.668512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.668516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.249 [2024-11-22 14:54:26.668533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:12.249 [2024-11-22 14:54:26.668565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.249 [2024-11-22 14:54:26.676400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.249 [2024-11-22 14:54:26.676424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.249 [2024-11-22 14:54:26.676445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.249 [2024-11-22 14:54:26.676462] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:12.249 [2024-11-22 14:54:26.676470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:12.249 [2024-11-22 14:54:26.676477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:12.249 [2024-11-22 14:54:26.676495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:12.249 [2024-11-22 14:54:26.676504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.249 [2024-11-22 14:54:26.676514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.249 [2024-11-22 14:54:26.676540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.249 [2024-11-22 14:54:26.676597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.249 [2024-11-22 14:54:26.676604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.249 [2024-11-22 14:54:26.676608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.249 [2024-11-22 14:54:26.676618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:12.249 [2024-11-22 14:54:26.676625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:12.249 [2024-11-22 14:54:26.676633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.249 [2024-11-22 14:54:26.676649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.249 [2024-11-22 14:54:26.676682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.249 [2024-11-22 14:54:26.676740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.249 [2024-11-22 14:54:26.676747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.249 [2024-11-22 14:54:26.676751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.249 [2024-11-22 14:54:26.676762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:12.249 [2024-11-22 14:54:26.676771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.249 [2024-11-22 14:54:26.676779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.249 [2024-11-22 14:54:26.676794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.249 [2024-11-22 14:54:26.676812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.249 [2024-11-22 14:54:26.676853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.249 [2024-11-22 14:54:26.676860] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.249 [2024-11-22 14:54:26.676864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.249 [2024-11-22 14:54:26.676875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.249 [2024-11-22 14:54:26.676885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.249 [2024-11-22 14:54:26.676901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.249 [2024-11-22 14:54:26.676918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.249 [2024-11-22 14:54:26.676958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.249 [2024-11-22 14:54:26.676965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.249 [2024-11-22 14:54:26.676969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.249 [2024-11-22 14:54:26.676973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.249 [2024-11-22 14:54:26.676978] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:12.249 [2024-11-22 14:54:26.676983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:12.249 [2024-11-22 14:54:26.676992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.250 [2024-11-22 14:54:26.677103] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:12.250 [2024-11-22 14:54:26.677110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.250 [2024-11-22 14:54:26.677121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.250 [2024-11-22 14:54:26.677155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.250 [2024-11-22 14:54:26.677198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.250 [2024-11-22 14:54:26.677205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.250 [2024-11-22 14:54:26.677209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:12.250 [2024-11-22 14:54:26.677213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.250 [2024-11-22 14:54:26.677218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.250 [2024-11-22 14:54:26.677229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.250 [2024-11-22 14:54:26.677261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.250 [2024-11-22 14:54:26.677306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.250 [2024-11-22 14:54:26.677313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.250 [2024-11-22 14:54:26.677317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.250 [2024-11-22 14:54:26.677326] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.250 [2024-11-22 14:54:26.677332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:12.250 [2024-11-22 14:54:26.677357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.250 [2024-11-22 14:54:26.677401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.250 [2024-11-22 14:54:26.677508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.250 [2024-11-22 14:54:26.677518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.250 [2024-11-22 14:54:26.677522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677527] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68f750): datao=0, datal=4096, cccid=0 00:16:12.250 [2024-11-22 14:54:26.677532] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6f3740) on tqpair(0x68f750): expected_datao=0, payload_size=4096 00:16:12.250 [2024-11-22 14:54:26.677537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:16:12.250 [2024-11-22 14:54:26.677546] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.250 [2024-11-22 14:54:26.677566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.250 [2024-11-22 14:54:26.677570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.250 [2024-11-22 14:54:26.677584] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:12.250 [2024-11-22 14:54:26.677589] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:12.250 [2024-11-22 14:54:26.677594] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:12.250 [2024-11-22 14:54:26.677600] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:12.250 [2024-11-22 14:54:26.677606] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:12.250 [2024-11-22 14:54:26.677611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.250 [2024-11-22 14:54:26.677673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.250 [2024-11-22 14:54:26.677726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.250 [2024-11-22 14:54:26.677733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.250 [2024-11-22 14:54:26.677737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.250 [2024-11-22 14:54:26.677751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.250 [2024-11-22 14:54:26.677772] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.250 [2024-11-22 14:54:26.677792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.250 [2024-11-22 14:54:26.677812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.250 [2024-11-22 14:54:26.677831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.250 [2024-11-22 14:54:26.677852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.677856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.677863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.250 [2024-11-22 14:54:26.677884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3740, cid 0, qid 0 00:16:12.250 [2024-11-22 14:54:26.677891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f38c0, cid 1, qid 0 00:16:12.250 [2024-11-22 14:54:26.677896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3a40, cid 2, qid 0 00:16:12.250 [2024-11-22 14:54:26.677901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.250 [2024-11-22 14:54:26.677906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3d40, cid 4, qid 0 00:16:12.250 [2024-11-22 14:54:26.677986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.250 [2024-11-22 14:54:26.677993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.250 [2024-11-22 14:54:26.677997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.678001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3d40) on tqpair=0x68f750 00:16:12.250 [2024-11-22 14:54:26.678007] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:12.250 [2024-11-22 14:54:26.678013] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:12.250 [2024-11-22 14:54:26.678025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.678029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68f750) 00:16:12.250 [2024-11-22 14:54:26.678037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.250 [2024-11-22 14:54:26.678055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3d40, cid 4, qid 0 00:16:12.250 [2024-11-22 14:54:26.678115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.250 [2024-11-22 14:54:26.678122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.250 [2024-11-22 14:54:26.678126] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.250 [2024-11-22 14:54:26.678130] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68f750): datao=0, datal=4096, cccid=4 00:16:12.250 [2024-11-22 14:54:26.678135] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6f3d40) on tqpair(0x68f750): expected_datao=0, payload_size=4096 00:16:12.251 [2024-11-22 14:54:26.678140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678147] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678151] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.251 [2024-11-22 14:54:26.678166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.251 [2024-11-22 14:54:26.678169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3d40) on tqpair=0x68f750 00:16:12.251 [2024-11-22 14:54:26.678188] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:12.251 [2024-11-22 14:54:26.678223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68f750) 00:16:12.251 [2024-11-22 14:54:26.678238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.251 [2024-11-22 14:54:26.678246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x68f750) 00:16:12.251 [2024-11-22 14:54:26.678260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.251 [2024-11-22 14:54:26.678288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x6f3d40, cid 4, qid 0 00:16:12.251 [2024-11-22 14:54:26.678296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3ec0, cid 5, qid 0 00:16:12.251 [2024-11-22 14:54:26.678416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.251 [2024-11-22 14:54:26.678425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.251 [2024-11-22 14:54:26.678429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678433] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68f750): datao=0, datal=1024, cccid=4 00:16:12.251 [2024-11-22 14:54:26.678438] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6f3d40) on tqpair(0x68f750): expected_datao=0, payload_size=1024 00:16:12.251 [2024-11-22 14:54:26.678442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678449] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678453] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.251 [2024-11-22 14:54:26.678465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.251 [2024-11-22 14:54:26.678469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3ec0) on tqpair=0x68f750 00:16:12.251 [2024-11-22 14:54:26.678491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.251 [2024-11-22 14:54:26.678499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.251 [2024-11-22 14:54:26.678503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3d40) on tqpair=0x68f750 00:16:12.251 [2024-11-22 14:54:26.678520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68f750) 00:16:12.251 [2024-11-22 14:54:26.678533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.251 [2024-11-22 14:54:26.678558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3d40, cid 4, qid 0 00:16:12.251 [2024-11-22 14:54:26.678621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.251 [2024-11-22 14:54:26.678628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.251 [2024-11-22 14:54:26.678632] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678635] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68f750): datao=0, datal=3072, cccid=4 00:16:12.251 [2024-11-22 14:54:26.678640] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6f3d40) on tqpair(0x68f750): expected_datao=0, payload_size=3072 00:16:12.251 [2024-11-22 14:54:26.678645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678652] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678656] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.251 [2024-11-22 14:54:26.678670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.251 [2024-11-22 14:54:26.678674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3d40) on tqpair=0x68f750 00:16:12.251 [2024-11-22 14:54:26.678688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x68f750) 00:16:12.251 [2024-11-22 14:54:26.678706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.251 [2024-11-22 14:54:26.678730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3d40, cid 4, qid 0 00:16:12.251 [2024-11-22 14:54:26.678791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.251 [2024-11-22 14:54:26.678798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.251 [2024-11-22 14:54:26.678802] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678806] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x68f750): datao=0, datal=8, cccid=4 00:16:12.251 [2024-11-22 14:54:26.678810] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6f3d40) on tqpair(0x68f750): expected_datao=0, payload_size=8 00:16:12.251 [2024-11-22 14:54:26.678815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678822] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678826] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.251 [2024-11-22 14:54:26.678848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.251 [2024-11-22 14:54:26.678852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.251 [2024-11-22 14:54:26.678856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3d40) on tqpair=0x68f750 00:16:12.251 ===================================================== 00:16:12.251 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:12.251 ===================================================== 00:16:12.251 Controller Capabilities/Features 00:16:12.251 ================================ 00:16:12.251 Vendor ID: 0000 00:16:12.251 Subsystem Vendor ID: 0000 00:16:12.251 Serial Number: .................... 00:16:12.251 Model Number: ........................................ 
00:16:12.251 Firmware Version: 25.01 00:16:12.251 Recommended Arb Burst: 0 00:16:12.251 IEEE OUI Identifier: 00 00 00 00:16:12.251 Multi-path I/O 00:16:12.251 May have multiple subsystem ports: No 00:16:12.251 May have multiple controllers: No 00:16:12.251 Associated with SR-IOV VF: No 00:16:12.251 Max Data Transfer Size: 131072 00:16:12.251 Max Number of Namespaces: 0 00:16:12.251 Max Number of I/O Queues: 1024 00:16:12.251 NVMe Specification Version (VS): 1.3 00:16:12.251 NVMe Specification Version (Identify): 1.3 00:16:12.251 Maximum Queue Entries: 128 00:16:12.251 Contiguous Queues Required: Yes 00:16:12.251 Arbitration Mechanisms Supported 00:16:12.251 Weighted Round Robin: Not Supported 00:16:12.251 Vendor Specific: Not Supported 00:16:12.251 Reset Timeout: 15000 ms 00:16:12.251 Doorbell Stride: 4 bytes 00:16:12.251 NVM Subsystem Reset: Not Supported 00:16:12.251 Command Sets Supported 00:16:12.251 NVM Command Set: Supported 00:16:12.251 Boot Partition: Not Supported 00:16:12.251 Memory Page Size Minimum: 4096 bytes 00:16:12.251 Memory Page Size Maximum: 4096 bytes 00:16:12.251 Persistent Memory Region: Not Supported 00:16:12.251 Optional Asynchronous Events Supported 00:16:12.251 Namespace Attribute Notices: Not Supported 00:16:12.251 Firmware Activation Notices: Not Supported 00:16:12.251 ANA Change Notices: Not Supported 00:16:12.251 PLE Aggregate Log Change Notices: Not Supported 00:16:12.251 LBA Status Info Alert Notices: Not Supported 00:16:12.251 EGE Aggregate Log Change Notices: Not Supported 00:16:12.251 Normal NVM Subsystem Shutdown event: Not Supported 00:16:12.251 Zone Descriptor Change Notices: Not Supported 00:16:12.251 Discovery Log Change Notices: Supported 00:16:12.251 Controller Attributes 00:16:12.251 128-bit Host Identifier: Not Supported 00:16:12.251 Non-Operational Permissive Mode: Not Supported 00:16:12.251 NVM Sets: Not Supported 00:16:12.251 Read Recovery Levels: Not Supported 00:16:12.251 Endurance Groups: Not Supported 00:16:12.251 Predictable Latency Mode: Not Supported 00:16:12.251 Traffic Based Keep ALive: Not Supported 00:16:12.251 Namespace Granularity: Not Supported 00:16:12.251 SQ Associations: Not Supported 00:16:12.251 UUID List: Not Supported 00:16:12.251 Multi-Domain Subsystem: Not Supported 00:16:12.251 Fixed Capacity Management: Not Supported 00:16:12.251 Variable Capacity Management: Not Supported 00:16:12.251 Delete Endurance Group: Not Supported 00:16:12.251 Delete NVM Set: Not Supported 00:16:12.251 Extended LBA Formats Supported: Not Supported 00:16:12.251 Flexible Data Placement Supported: Not Supported 00:16:12.251 00:16:12.251 Controller Memory Buffer Support 00:16:12.251 ================================ 00:16:12.251 Supported: No 00:16:12.251 00:16:12.251 Persistent Memory Region Support 00:16:12.251 ================================ 00:16:12.251 Supported: No 00:16:12.251 00:16:12.251 Admin Command Set Attributes 00:16:12.251 ============================ 00:16:12.251 Security Send/Receive: Not Supported 00:16:12.252 Format NVM: Not Supported 00:16:12.252 Firmware Activate/Download: Not Supported 00:16:12.252 Namespace Management: Not Supported 00:16:12.252 Device Self-Test: Not Supported 00:16:12.252 Directives: Not Supported 00:16:12.252 NVMe-MI: Not Supported 00:16:12.252 Virtualization Management: Not Supported 00:16:12.252 Doorbell Buffer Config: Not Supported 00:16:12.252 Get LBA Status Capability: Not Supported 00:16:12.252 Command & Feature Lockdown Capability: Not Supported 00:16:12.252 Abort Command Limit: 1 00:16:12.252 Async 
Event Request Limit: 4 00:16:12.252 Number of Firmware Slots: N/A 00:16:12.252 Firmware Slot 1 Read-Only: N/A 00:16:12.252 Firmware Activation Without Reset: N/A 00:16:12.252 Multiple Update Detection Support: N/A 00:16:12.252 Firmware Update Granularity: No Information Provided 00:16:12.252 Per-Namespace SMART Log: No 00:16:12.252 Asymmetric Namespace Access Log Page: Not Supported 00:16:12.252 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:12.252 Command Effects Log Page: Not Supported 00:16:12.252 Get Log Page Extended Data: Supported 00:16:12.252 Telemetry Log Pages: Not Supported 00:16:12.252 Persistent Event Log Pages: Not Supported 00:16:12.252 Supported Log Pages Log Page: May Support 00:16:12.252 Commands Supported & Effects Log Page: Not Supported 00:16:12.252 Feature Identifiers & Effects Log Page:May Support 00:16:12.252 NVMe-MI Commands & Effects Log Page: May Support 00:16:12.252 Data Area 4 for Telemetry Log: Not Supported 00:16:12.252 Error Log Page Entries Supported: 128 00:16:12.252 Keep Alive: Not Supported 00:16:12.252 00:16:12.252 NVM Command Set Attributes 00:16:12.252 ========================== 00:16:12.252 Submission Queue Entry Size 00:16:12.252 Max: 1 00:16:12.252 Min: 1 00:16:12.252 Completion Queue Entry Size 00:16:12.252 Max: 1 00:16:12.252 Min: 1 00:16:12.252 Number of Namespaces: 0 00:16:12.252 Compare Command: Not Supported 00:16:12.252 Write Uncorrectable Command: Not Supported 00:16:12.252 Dataset Management Command: Not Supported 00:16:12.252 Write Zeroes Command: Not Supported 00:16:12.252 Set Features Save Field: Not Supported 00:16:12.252 Reservations: Not Supported 00:16:12.252 Timestamp: Not Supported 00:16:12.252 Copy: Not Supported 00:16:12.252 Volatile Write Cache: Not Present 00:16:12.252 Atomic Write Unit (Normal): 1 00:16:12.252 Atomic Write Unit (PFail): 1 00:16:12.252 Atomic Compare & Write Unit: 1 00:16:12.252 Fused Compare & Write: Supported 00:16:12.252 Scatter-Gather List 00:16:12.252 SGL Command Set: Supported 00:16:12.252 SGL Keyed: Supported 00:16:12.252 SGL Bit Bucket Descriptor: Not Supported 00:16:12.252 SGL Metadata Pointer: Not Supported 00:16:12.252 Oversized SGL: Not Supported 00:16:12.252 SGL Metadata Address: Not Supported 00:16:12.252 SGL Offset: Supported 00:16:12.252 Transport SGL Data Block: Not Supported 00:16:12.252 Replay Protected Memory Block: Not Supported 00:16:12.252 00:16:12.252 Firmware Slot Information 00:16:12.252 ========================= 00:16:12.252 Active slot: 0 00:16:12.252 00:16:12.252 00:16:12.252 Error Log 00:16:12.252 ========= 00:16:12.252 00:16:12.252 Active Namespaces 00:16:12.252 ================= 00:16:12.252 Discovery Log Page 00:16:12.252 ================== 00:16:12.252 Generation Counter: 2 00:16:12.252 Number of Records: 2 00:16:12.252 Record Format: 0 00:16:12.252 00:16:12.252 Discovery Log Entry 0 00:16:12.252 ---------------------- 00:16:12.252 Transport Type: 3 (TCP) 00:16:12.252 Address Family: 1 (IPv4) 00:16:12.252 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:12.252 Entry Flags: 00:16:12.252 Duplicate Returned Information: 1 00:16:12.252 Explicit Persistent Connection Support for Discovery: 1 00:16:12.252 Transport Requirements: 00:16:12.252 Secure Channel: Not Required 00:16:12.252 Port ID: 0 (0x0000) 00:16:12.252 Controller ID: 65535 (0xffff) 00:16:12.252 Admin Max SQ Size: 128 00:16:12.252 Transport Service Identifier: 4420 00:16:12.252 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:12.252 Transport Address: 10.0.0.3 00:16:12.252 
Discovery Log Entry 1 00:16:12.252 ---------------------- 00:16:12.252 Transport Type: 3 (TCP) 00:16:12.252 Address Family: 1 (IPv4) 00:16:12.252 Subsystem Type: 2 (NVM Subsystem) 00:16:12.252 Entry Flags: 00:16:12.252 Duplicate Returned Information: 0 00:16:12.252 Explicit Persistent Connection Support for Discovery: 0 00:16:12.252 Transport Requirements: 00:16:12.252 Secure Channel: Not Required 00:16:12.252 Port ID: 0 (0x0000) 00:16:12.252 Controller ID: 65535 (0xffff) 00:16:12.252 Admin Max SQ Size: 128 00:16:12.252 Transport Service Identifier: 4420 00:16:12.252 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:12.252 Transport Address: 10.0.0.3 [2024-11-22 14:54:26.678957] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:12.252 [2024-11-22 14:54:26.678970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3740) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.678978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.252 [2024-11-22 14:54:26.678983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f38c0) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.678988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.252 [2024-11-22 14:54:26.678993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3a40) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.678998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.252 [2024-11-22 14:54:26.679004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.679008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.252 [2024-11-22 14:54:26.679018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.252 [2024-11-22 14:54:26.679034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.252 [2024-11-22 14:54:26.679057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.252 [2024-11-22 14:54:26.679102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.252 [2024-11-22 14:54:26.679109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.252 [2024-11-22 14:54:26.679113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.679126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.252 [2024-11-22 14:54:26.679141] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.252 [2024-11-22 14:54:26.679163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.252 [2024-11-22 14:54:26.679223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.252 [2024-11-22 14:54:26.679230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.252 [2024-11-22 14:54:26.679234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.679244] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:12.252 [2024-11-22 14:54:26.679249] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:12.252 [2024-11-22 14:54:26.679259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.252 [2024-11-22 14:54:26.679275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.252 [2024-11-22 14:54:26.679292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.252 [2024-11-22 14:54:26.679337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.252 [2024-11-22 14:54:26.679344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.252 [2024-11-22 14:54:26.679348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.252 [2024-11-22 14:54:26.679363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.252 [2024-11-22 14:54:26.679386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.679497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679506] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.679603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.679705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.679805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.679908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.679924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.679940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.679983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.679990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.679994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.679998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.680008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.680023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.680040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.680090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.680097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.680101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.680116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.680131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.680148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.680191] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.680198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.680202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.680216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.680233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.680249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.680292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.680299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.680302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.680317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.680325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.680332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.680349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.253 [2024-11-22 14:54:26.684407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.253 [2024-11-22 14:54:26.684429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.253 [2024-11-22 14:54:26.684450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.684454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.253 [2024-11-22 14:54:26.684469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.684474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.253 [2024-11-22 14:54:26.684478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x68f750) 00:16:12.253 [2024-11-22 14:54:26.684486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.253 [2024-11-22 14:54:26.684510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6f3bc0, cid 3, qid 0 00:16:12.254 [2024-11-22 14:54:26.684563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.684570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.684574] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.684578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6f3bc0) on tqpair=0x68f750 00:16:12.254 [2024-11-22 14:54:26.684586] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:16:12.254 00:16:12.254 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:12.254 [2024-11-22 14:54:26.731130] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:12.254 [2024-11-22 14:54:26.731185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74412 ] 00:16:12.254 [2024-11-22 14:54:26.885309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:12.254 [2024-11-22 14:54:26.889399] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:12.254 [2024-11-22 14:54:26.889419] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:12.254 [2024-11-22 14:54:26.889454] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:12.254 [2024-11-22 14:54:26.889465] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:12.254 [2024-11-22 14:54:26.889765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:12.254 [2024-11-22 14:54:26.889848] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdb1750 0 00:16:12.254 [2024-11-22 14:54:26.897393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:12.254 [2024-11-22 14:54:26.897419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:12.254 [2024-11-22 14:54:26.897441] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:12.254 [2024-11-22 14:54:26.897444] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:12.254 [2024-11-22 14:54:26.897475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.897482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.897486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.897499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:12.254 [2024-11-22 14:54:26.897530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.902394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.902416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.902437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.902452] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:12.254 [2024-11-22 14:54:26.902460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:12.254 [2024-11-22 14:54:26.902467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:12.254 [2024-11-22 14:54:26.902482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.902500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.902526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.902582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.902589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.902593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.902602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:12.254 [2024-11-22 14:54:26.902610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:12.254 [2024-11-22 14:54:26.902618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.902632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.902650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.902725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.902732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.902736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.902746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:12.254 [2024-11-22 14:54:26.902755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.902762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902767] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.902778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.902795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.902842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.902849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.902852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.902863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.902873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.902889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.902906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.902958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.902965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.902968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.902973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.902978] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:12.254 [2024-11-22 14:54:26.902983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.902991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.903103] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:12.254 [2024-11-22 14:54:26.903109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.903119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 
14:54:26.903135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.903154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.903204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.903211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.903215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.254 [2024-11-22 14:54:26.903224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.254 [2024-11-22 14:54:26.903234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.254 [2024-11-22 14:54:26.903250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.254 [2024-11-22 14:54:26.903267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.254 [2024-11-22 14:54:26.903312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.254 [2024-11-22 14:54:26.903319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.254 [2024-11-22 14:54:26.903322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.254 [2024-11-22 14:54:26.903326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.903331] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.255 [2024-11-22 14:54:26.903337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:12.255 [2024-11-22 14:54:26.903360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.255 [2024-11-22 14:54:26.903429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.255 [2024-11-22 14:54:26.903528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.255 [2024-11-22 14:54:26.903535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:16:12.255 [2024-11-22 14:54:26.903539] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903543] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=4096, cccid=0 00:16:12.255 [2024-11-22 14:54:26.903548] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15740) on tqpair(0xdb1750): expected_datao=0, payload_size=4096 00:16:12.255 [2024-11-22 14:54:26.903553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903566] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.255 [2024-11-22 14:54:26.903580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.255 [2024-11-22 14:54:26.903584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.903597] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:12.255 [2024-11-22 14:54:26.903603] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:12.255 [2024-11-22 14:54:26.903607] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:12.255 [2024-11-22 14:54:26.903612] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:12.255 [2024-11-22 14:54:26.903617] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:12.255 [2024-11-22 14:54:26.903622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.255 [2024-11-22 14:54:26.903697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.255 [2024-11-22 14:54:26.903746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.255 [2024-11-22 14:54:26.903753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.255 [2024-11-22 14:54:26.903757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.903768] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.255 [2024-11-22 14:54:26.903789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.255 [2024-11-22 14:54:26.903808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.255 [2024-11-22 14:54:26.903827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.255 [2024-11-22 14:54:26.903845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.903865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.903869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.903876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.255 [2024-11-22 14:54:26.903896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15740, cid 0, qid 0 00:16:12.255 [2024-11-22 14:54:26.903903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe158c0, cid 1, qid 0 00:16:12.255 [2024-11-22 14:54:26.903908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15a40, cid 2, qid 0 00:16:12.255 [2024-11-22 14:54:26.903913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.255 [2024-11-22 14:54:26.903918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.255 [2024-11-22 14:54:26.904002] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.255 [2024-11-22 14:54:26.904008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.255 [2024-11-22 14:54:26.904012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.904021] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:12.255 [2024-11-22 14:54:26.904027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.904035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.904047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.904055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.904070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.255 [2024-11-22 14:54:26.904087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.255 [2024-11-22 14:54:26.904139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.255 [2024-11-22 14:54:26.904146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.255 [2024-11-22 14:54:26.904150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.904217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.904230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:12.255 [2024-11-22 14:54:26.904238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.255 [2024-11-22 14:54:26.904250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.255 [2024-11-22 14:54:26.904268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.255 [2024-11-22 14:54:26.904333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.255 [2024-11-22 14:54:26.904339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.255 [2024-11-22 14:54:26.904343] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:16:12.255 [2024-11-22 14:54:26.904347] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=4096, cccid=4 00:16:12.255 [2024-11-22 14:54:26.904352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15d40) on tqpair(0xdb1750): expected_datao=0, payload_size=4096 00:16:12.255 [2024-11-22 14:54:26.904356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904363] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904367] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.255 [2024-11-22 14:54:26.904381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.255 [2024-11-22 14:54:26.904398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.255 [2024-11-22 14:54:26.904403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.255 [2024-11-22 14:54:26.904436] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:12.256 [2024-11-22 14:54:26.904448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.904480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.904501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.256 [2024-11-22 14:54:26.904616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.256 [2024-11-22 14:54:26.904623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.256 [2024-11-22 14:54:26.904626] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904630] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=4096, cccid=4 00:16:12.256 [2024-11-22 14:54:26.904635] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15d40) on tqpair(0xdb1750): expected_datao=0, payload_size=4096 00:16:12.256 [2024-11-22 14:54:26.904640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904647] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.904665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.904669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904673] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.904691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.904723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.904742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.256 [2024-11-22 14:54:26.904819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.256 [2024-11-22 14:54:26.904825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.256 [2024-11-22 14:54:26.904829] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904832] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=4096, cccid=4 00:16:12.256 [2024-11-22 14:54:26.904837] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15d40) on tqpair(0xdb1750): expected_datao=0, payload_size=4096 00:16:12.256 [2024-11-22 14:54:26.904841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904848] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904852] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.904866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.904869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.904882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 
30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904928] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:12.256 [2024-11-22 14:54:26.904933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:12.256 [2024-11-22 14:54:26.904939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:12.256 [2024-11-22 14:54:26.904955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.904967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.904974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.904982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.904988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.256 [2024-11-22 14:54:26.905012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.256 [2024-11-22 14:54:26.905019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15ec0, cid 5, qid 0 00:16:12.256 [2024-11-22 14:54:26.905085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.905092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.905095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.905106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.905112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.905116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15ec0) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.905129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15ec0, cid 5, qid 0 00:16:12.256 [2024-11-22 14:54:26.905199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.905206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.905209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 
[2024-11-22 14:54:26.905213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15ec0) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.905223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15ec0, cid 5, qid 0 00:16:12.256 [2024-11-22 14:54:26.905294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.905300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.905304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15ec0) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.905318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15ec0, cid 5, qid 0 00:16:12.256 [2024-11-22 14:54:26.905416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.256 [2024-11-22 14:54:26.905424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.256 [2024-11-22 14:54:26.905428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15ec0) on tqpair=0xdb1750 00:16:12.256 [2024-11-22 14:54:26.905451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:12.256 [2024-11-22 14:54:26.905506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.256 [2024-11-22 14:54:26.905510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdb1750) 00:16:12.256 [2024-11-22 14:54:26.905516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.256 [2024-11-22 14:54:26.905538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15ec0, cid 5, qid 0 00:16:12.257 [2024-11-22 14:54:26.905545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15d40, cid 4, qid 0 00:16:12.257 [2024-11-22 14:54:26.905550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe16040, cid 6, qid 0 00:16:12.257 [2024-11-22 14:54:26.905555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe161c0, cid 7, qid 0 00:16:12.257 [2024-11-22 14:54:26.905700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.257 [2024-11-22 14:54:26.905707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.257 [2024-11-22 14:54:26.905711] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905714] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=8192, cccid=5 00:16:12.257 [2024-11-22 14:54:26.905719] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15ec0) on tqpair(0xdb1750): expected_datao=0, payload_size=8192 00:16:12.257 [2024-11-22 14:54:26.905724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905745] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.257 [2024-11-22 14:54:26.905756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.257 [2024-11-22 14:54:26.905760] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=512, cccid=4 00:16:12.257 [2024-11-22 14:54:26.905768] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe15d40) on tqpair(0xdb1750): expected_datao=0, payload_size=512 00:16:12.257 [2024-11-22 14:54:26.905772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905782] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.257 [2024-11-22 14:54:26.905793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.257 [2024-11-22 14:54:26.905796] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=512, cccid=6 00:16:12.257 [2024-11-22 14:54:26.905804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe16040) on tqpair(0xdb1750): expected_datao=0, 
payload_size=512 00:16:12.257 [2024-11-22 14:54:26.905808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:12.257 [2024-11-22 14:54:26.905829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:12.257 [2024-11-22 14:54:26.905832] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905836] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb1750): datao=0, datal=4096, cccid=7 00:16:12.257 [2024-11-22 14:54:26.905840] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe161c0) on tqpair(0xdb1750): expected_datao=0, payload_size=4096 00:16:12.257 [2024-11-22 14:54:26.905844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905851] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.257 [2024-11-22 14:54:26.905868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.257 [2024-11-22 14:54:26.905871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15ec0) on tqpair=0xdb1750 00:16:12.257 [2024-11-22 14:54:26.905891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.257 [2024-11-22 14:54:26.905897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.257 [2024-11-22 14:54:26.905901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15d40) on tqpair=0xdb1750 00:16:12.257 [2024-11-22 14:54:26.905917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.257 [2024-11-22 14:54:26.905924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.257 [2024-11-22 14:54:26.905927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.257 [2024-11-22 14:54:26.905931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe16040) on tqpair=0xdb1750 00:16:12.257 [2024-11-22 14:54:26.905938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.257 ===================================================== 00:16:12.257 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.257 ===================================================== 00:16:12.257 Controller Capabilities/Features 00:16:12.257 ================================ 00:16:12.257 Vendor ID: 8086 00:16:12.257 Subsystem Vendor ID: 8086 00:16:12.257 Serial Number: SPDK00000000000001 00:16:12.257 Model Number: SPDK bdev Controller 00:16:12.257 Firmware Version: 25.01 00:16:12.257 Recommended Arb Burst: 6 00:16:12.257 IEEE OUI Identifier: e4 d2 5c 00:16:12.257 Multi-path I/O 00:16:12.257 May have multiple subsystem ports: Yes 00:16:12.257 May have multiple controllers: Yes 00:16:12.257 Associated with SR-IOV VF: No 00:16:12.257 Max 
Data Transfer Size: 131072 00:16:12.257 Max Number of Namespaces: 32 00:16:12.257 Max Number of I/O Queues: 127 00:16:12.257 NVMe Specification Version (VS): 1.3 00:16:12.257 NVMe Specification Version (Identify): 1.3 00:16:12.257 Maximum Queue Entries: 128 00:16:12.257 Contiguous Queues Required: Yes 00:16:12.257 Arbitration Mechanisms Supported 00:16:12.257 Weighted Round Robin: Not Supported 00:16:12.257 Vendor Specific: Not Supported 00:16:12.257 Reset Timeout: 15000 ms 00:16:12.257 Doorbell Stride: 4 bytes 00:16:12.257 NVM Subsystem Reset: Not Supported 00:16:12.257 Command Sets Supported 00:16:12.257 NVM Command Set: Supported 00:16:12.257 Boot Partition: Not Supported 00:16:12.257 Memory Page Size Minimum: 4096 bytes 00:16:12.257 Memory Page Size Maximum: 4096 bytes 00:16:12.257 Persistent Memory Region: Not Supported 00:16:12.257 Optional Asynchronous Events Supported 00:16:12.257 Namespace Attribute Notices: Supported 00:16:12.257 Firmware Activation Notices: Not Supported 00:16:12.257 ANA Change Notices: Not Supported 00:16:12.257 PLE Aggregate Log Change Notices: Not Supported 00:16:12.257 LBA Status Info Alert Notices: Not Supported 00:16:12.257 EGE Aggregate Log Change Notices: Not Supported 00:16:12.257 Normal NVM Subsystem Shutdown event: Not Supported 00:16:12.257 Zone Descriptor Change Notices: Not Supported 00:16:12.257 Discovery Log Change Notices: Not Supported 00:16:12.257 Controller Attributes 00:16:12.257 128-bit Host Identifier: Supported 00:16:12.257 Non-Operational Permissive Mode: Not Supported 00:16:12.257 NVM Sets: Not Supported 00:16:12.257 Read Recovery Levels: Not Supported 00:16:12.257 Endurance Groups: Not Supported 00:16:12.257 Predictable Latency Mode: Not Supported 00:16:12.257 Traffic Based Keep ALive: Not Supported 00:16:12.257 Namespace Granularity: Not Supported 00:16:12.257 SQ Associations: Not Supported 00:16:12.257 UUID List: Not Supported 00:16:12.257 Multi-Domain Subsystem: Not Supported 00:16:12.257 Fixed Capacity Management: Not Supported 00:16:12.257 Variable Capacity Management: Not Supported 00:16:12.257 Delete Endurance Group: Not Supported 00:16:12.257 Delete NVM Set: Not Supported 00:16:12.257 Extended LBA Formats Supported: Not Supported 00:16:12.257 Flexible Data Placement Supported: Not Supported 00:16:12.257 00:16:12.257 Controller Memory Buffer Support 00:16:12.257 ================================ 00:16:12.257 Supported: No 00:16:12.257 00:16:12.257 Persistent Memory Region Support 00:16:12.257 ================================ 00:16:12.257 Supported: No 00:16:12.257 00:16:12.257 Admin Command Set Attributes 00:16:12.257 ============================ 00:16:12.257 Security Send/Receive: Not Supported 00:16:12.257 Format NVM: Not Supported 00:16:12.257 Firmware Activate/Download: Not Supported 00:16:12.257 Namespace Management: Not Supported 00:16:12.257 Device Self-Test: Not Supported 00:16:12.257 Directives: Not Supported 00:16:12.257 NVMe-MI: Not Supported 00:16:12.257 Virtualization Management: Not Supported 00:16:12.257 Doorbell Buffer Config: Not Supported 00:16:12.257 Get LBA Status Capability: Not Supported 00:16:12.257 Command & Feature Lockdown Capability: Not Supported 00:16:12.257 Abort Command Limit: 4 00:16:12.257 Async Event Request Limit: 4 00:16:12.257 Number of Firmware Slots: N/A 00:16:12.257 Firmware Slot 1 Read-Only: N/A 00:16:12.257 Firmware Activation Without Reset: N/A 00:16:12.257 Multiple Update Detection Support: N/A 00:16:12.257 Firmware Update Granularity: No Information Provided 00:16:12.257 Per-Namespace 
SMART Log: No 00:16:12.257 Asymmetric Namespace Access Log Page: Not Supported 00:16:12.257 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:12.257 Command Effects Log Page: Supported 00:16:12.257 Get Log Page Extended Data: Supported 00:16:12.257 Telemetry Log Pages: Not Supported 00:16:12.257 Persistent Event Log Pages: Not Supported 00:16:12.257 Supported Log Pages Log Page: May Support 00:16:12.257 Commands Supported & Effects Log Page: Not Supported 00:16:12.257 Feature Identifiers & Effects Log Page:May Support 00:16:12.257 NVMe-MI Commands & Effects Log Page: May Support 00:16:12.257 Data Area 4 for Telemetry Log: Not Supported 00:16:12.257 Error Log Page Entries Supported: 128 00:16:12.257 Keep Alive: Supported 00:16:12.257 Keep Alive Granularity: 10000 ms 00:16:12.258 00:16:12.258 NVM Command Set Attributes 00:16:12.258 ========================== 00:16:12.258 Submission Queue Entry Size 00:16:12.258 Max: 64 00:16:12.258 Min: 64 00:16:12.258 Completion Queue Entry Size 00:16:12.258 Max: 16 00:16:12.258 Min: 16 00:16:12.258 Number of Namespaces: 32 00:16:12.258 Compare Command: Supported 00:16:12.258 Write Uncorrectable Command: Not Supported 00:16:12.258 Dataset Management Command: Supported 00:16:12.258 Write Zeroes Command: Supported 00:16:12.258 Set Features Save Field: Not Supported 00:16:12.258 Reservations: Supported 00:16:12.258 Timestamp: Not Supported 00:16:12.258 Copy: Supported 00:16:12.258 Volatile Write Cache: Present 00:16:12.258 Atomic Write Unit (Normal): 1 00:16:12.258 Atomic Write Unit (PFail): 1 00:16:12.258 Atomic Compare & Write Unit: 1 00:16:12.258 Fused Compare & Write: Supported 00:16:12.258 Scatter-Gather List 00:16:12.258 SGL Command Set: Supported 00:16:12.258 SGL Keyed: Supported 00:16:12.258 SGL Bit Bucket Descriptor: Not Supported 00:16:12.258 SGL Metadata Pointer: Not Supported 00:16:12.258 Oversized SGL: Not Supported 00:16:12.258 SGL Metadata Address: Not Supported 00:16:12.258 SGL Offset: Supported 00:16:12.258 Transport SGL Data Block: Not Supported 00:16:12.258 Replay Protected Memory Block: Not Supported 00:16:12.258 00:16:12.258 Firmware Slot Information 00:16:12.258 ========================= 00:16:12.258 Active slot: 1 00:16:12.258 Slot 1 Firmware Revision: 25.01 00:16:12.258 00:16:12.258 00:16:12.258 Commands Supported and Effects 00:16:12.258 ============================== 00:16:12.258 Admin Commands 00:16:12.258 -------------- 00:16:12.258 Get Log Page (02h): Supported 00:16:12.258 Identify (06h): Supported 00:16:12.258 Abort (08h): Supported 00:16:12.258 Set Features (09h): Supported 00:16:12.258 Get Features (0Ah): Supported 00:16:12.258 Asynchronous Event Request (0Ch): Supported 00:16:12.258 Keep Alive (18h): Supported 00:16:12.258 I/O Commands 00:16:12.258 ------------ 00:16:12.258 Flush (00h): Supported LBA-Change 00:16:12.258 Write (01h): Supported LBA-Change 00:16:12.258 Read (02h): Supported 00:16:12.258 Compare (05h): Supported 00:16:12.258 Write Zeroes (08h): Supported LBA-Change 00:16:12.258 Dataset Management (09h): Supported LBA-Change 00:16:12.258 Copy (19h): Supported LBA-Change 00:16:12.258 00:16:12.258 Error Log 00:16:12.258 ========= 00:16:12.258 00:16:12.258 Arbitration 00:16:12.258 =========== 00:16:12.258 Arbitration Burst: 1 00:16:12.258 00:16:12.258 Power Management 00:16:12.258 ================ 00:16:12.258 Number of Power States: 1 00:16:12.258 Current Power State: Power State #0 00:16:12.258 Power State #0: 00:16:12.258 Max Power: 0.00 W 00:16:12.258 Non-Operational State: Operational 00:16:12.258 Entry Latency: 
Not Reported 00:16:12.258 Exit Latency: Not Reported 00:16:12.258 Relative Read Throughput: 0 00:16:12.258 Relative Read Latency: 0 00:16:12.258 Relative Write Throughput: 0 00:16:12.258 Relative Write Latency: 0 00:16:12.258 Idle Power: Not Reported 00:16:12.258 Active Power: Not Reported 00:16:12.258 Non-Operational Permissive Mode: Not Supported 00:16:12.258 00:16:12.258 Health Information 00:16:12.258 ================== 00:16:12.258 Critical Warnings: 00:16:12.258 Available Spare Space: OK 00:16:12.258 Temperature: OK 00:16:12.258 Device Reliability: OK 00:16:12.258 Read Only: No 00:16:12.258 Volatile Memory Backup: OK 00:16:12.258 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:12.258 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:12.258 Available Spare: 0% 00:16:12.258 Available Spare Threshold: 0% 00:16:12.258 Life Percentage Used:[2024-11-22 14:54:26.905944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.258 [2024-11-22 14:54:26.905947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.905951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe161c0) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.906068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdb1750) 00:16:12.258 [2024-11-22 14:54:26.906076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.258 [2024-11-22 14:54:26.906098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe161c0, cid 7, qid 0 00:16:12.258 [2024-11-22 14:54:26.906148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.258 [2024-11-22 14:54:26.906155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.258 [2024-11-22 14:54:26.906159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.906162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe161c0) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906202] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:12.258 [2024-11-22 14:54:26.906213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15740) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.258 [2024-11-22 14:54:26.906226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe158c0) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.258 [2024-11-22 14:54:26.906236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15a40) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.258 [2024-11-22 14:54:26.906245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.258 [2024-11-22 14:54:26.906250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.258 [2024-11-22 14:54:26.906259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.906278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.906283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.258 [2024-11-22 14:54:26.906290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.258 [2024-11-22 14:54:26.906312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.258 [2024-11-22 14:54:26.906360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.258 [2024-11-22 14:54:26.906367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.258 [2024-11-22 14:54:26.906371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.258 [2024-11-22 14:54:26.906375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.520 [2024-11-22 14:54:26.906383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.520 [2024-11-22 14:54:26.911462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.520 [2024-11-22 14:54:26.911494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.520 [2024-11-22 14:54:26.911561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.520 [2024-11-22 14:54:26.911568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.520 [2024-11-22 14:54:26.911572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.520 [2024-11-22 14:54:26.911581] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:12.520 [2024-11-22 14:54:26.911586] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:12.520 [2024-11-22 14:54:26.911596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.520 [2024-11-22 14:54:26.911612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.520 [2024-11-22 14:54:26.911646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.520 [2024-11-22 14:54:26.911694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.520 [2024-11-22 14:54:26.911701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.520 [2024-11-22 14:54:26.911704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:12.520 [2024-11-22 14:54:26.911708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.520 [2024-11-22 14:54:26.911719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.520 [2024-11-22 14:54:26.911727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.520 [2024-11-22 14:54:26.911734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.520 [2024-11-22 14:54:26.911751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.911794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.911801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.911805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.911819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.911834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.911851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.911895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.911902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.911906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.911919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.911928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.911935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.911951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.911995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912019] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912329] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.521 [2024-11-22 14:54:26.912862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.521 [2024-11-22 14:54:26.912879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.521 [2024-11-22 14:54:26.912926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.521 [2024-11-22 14:54:26.912933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.521 [2024-11-22 14:54:26.912936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.521 [2024-11-22 14:54:26.912940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.521 [2024-11-22 14:54:26.912950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.912954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.912958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.522 [2024-11-22 14:54:26.912965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.522 [2024-11-22 14:54:26.912981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.522 [2024-11-22 14:54:26.913029] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.522 [2024-11-22 14:54:26.913036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.522 [2024-11-22 14:54:26.913039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.522 [2024-11-22 14:54:26.913053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.522 [2024-11-22 14:54:26.913068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.522 [2024-11-22 14:54:26.913084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.522 [2024-11-22 14:54:26.913129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.522 [2024-11-22 14:54:26.913136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.522 [2024-11-22 14:54:26.913139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.522 [2024-11-22 14:54:26.913153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.522 [2024-11-22 14:54:26.913168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.522 [2024-11-22 14:54:26.913184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.522 [2024-11-22 14:54:26.913229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.522 [2024-11-22 14:54:26.913236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.522 [2024-11-22 14:54:26.913239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.522 [2024-11-22 14:54:26.913253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.522 [2024-11-22 14:54:26.913268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.522 [2024-11-22 14:54:26.913284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.522 [2024-11-22 14:54:26.913327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.522 [2024-11-22 14:54:26.913335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.522 [2024-11-22 14:54:26.913338] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.522 [2024-11-22 14:54:26.913352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.522 [2024-11-22 14:54:26.913361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.522 [2024-11-22 14:54:26.913368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.522 [2024-11-22 14:54:26.913414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.522
[the same nvme_tcp / nvme_qpair DEBUG sequence for tcp req 0xe15bc0 (cid 3, qid 0) on tqpair 0xdb1750 repeats for each subsequent shutdown poll iteration up to 14:54:26.915344]
00:16:12.523 [2024-11-22 14:54:26.920438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.523 [2024-11-22 14:54:26.920460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.523 [2024-11-22 14:54:26.920482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.523 [2024-11-22 14:54:26.920486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750
00:16:12.523 [2024-11-22 14:54:26.920501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:12.523 [2024-11-22 14:54:26.920506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:12.523 [2024-11-22 14:54:26.920510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb1750) 00:16:12.523 [2024-11-22 14:54:26.920518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.523 [2024-11-22 14:54:26.920543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe15bc0, cid 3, qid 0 00:16:12.523 [2024-11-22 14:54:26.920600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:12.523 [2024-11-22 14:54:26.920606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:12.524 [2024-11-22 14:54:26.920610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:12.524 [2024-11-22 14:54:26.920614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe15bc0) on tqpair=0xdb1750 00:16:12.524 [2024-11-22 14:54:26.920622] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 9 milliseconds 00:16:12.524 0% 00:16:12.524 Data Units Read: 0 00:16:12.524 Data Units Written: 0 00:16:12.524 Host Read Commands: 0 00:16:12.524 Host Write Commands: 0 00:16:12.524 Controller Busy Time: 0 minutes 00:16:12.524 Power Cycles: 0 00:16:12.524 Power On Hours: 0 hours 00:16:12.524 Unsafe Shutdowns: 0 00:16:12.524 Unrecoverable Media Errors: 0 00:16:12.524 Lifetime Error Log Entries: 0 00:16:12.524 Warning Temperature Time: 0 minutes 00:16:12.524 Critical Temperature Time: 0 minutes 00:16:12.524 00:16:12.524 Number of Queues 00:16:12.524 ================ 00:16:12.524 Number of I/O Submission Queues: 127 00:16:12.524 Number of I/O Completion Queues: 127 00:16:12.524 00:16:12.524 Active Namespaces 00:16:12.524 ================= 00:16:12.524 Namespace ID:1 00:16:12.524 Error Recovery Timeout: Unlimited 00:16:12.524 Command Set Identifier: NVM (00h) 00:16:12.524 Deallocate: Supported 00:16:12.524 Deallocated/Unwritten Error: Not Supported 00:16:12.524 Deallocated Read Value: Unknown 00:16:12.524 Deallocate in Write Zeroes: Not Supported 00:16:12.524 Deallocated Guard Field: 0xFFFF 00:16:12.524 Flush: Supported 00:16:12.524 Reservation: Supported 00:16:12.524 Namespace Sharing Capabilities: Multiple Controllers 00:16:12.524 Size (in LBAs): 131072 (0GiB) 00:16:12.524 Capacity (in LBAs): 131072 (0GiB) 00:16:12.524 Utilization (in LBAs): 131072 (0GiB) 00:16:12.524 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:12.524 EUI64: ABCDEF0123456789 00:16:12.524 UUID: 38754a70-6ce6-4212-a24b-65f87dc07f77 00:16:12.524 Thin Provisioning: Not Supported 00:16:12.524 Per-NS Atomic Units: Yes 00:16:12.524 Atomic Boundary Size (Normal): 0 00:16:12.524 Atomic Boundary Size (PFail): 0 00:16:12.524 Atomic Boundary Offset: 0 00:16:12.524 Maximum Single Source Range Length: 65535 00:16:12.524 Maximum Copy Length: 65535 00:16:12.524 Maximum Source Range Count: 1 00:16:12.524 NGUID/EUI64 Never Reused: No 00:16:12.524 Namespace Write Protected: No 00:16:12.524 Number of LBA Formats: 1 00:16:12.524 Current LBA Format: LBA Format #00 00:16:12.524 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:12.524 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.524 14:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.524 rmmod nvme_tcp 00:16:12.524 rmmod nvme_fabrics 00:16:12.524 rmmod nvme_keyring 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74375 ']' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74375 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74375 ']' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74375 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74375 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.524 killing process with pid 74375 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74375' 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74375 00:16:12.524 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74375 00:16:12.783 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.783 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@791 -- # iptables-restore 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:12.784 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:13.043 00:16:13.043 real 0m3.029s 00:16:13.043 user 0m7.638s 00:16:13.043 sys 0m0.853s 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.043 14:54:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:13.043 ************************************ 00:16:13.043 END TEST nvmf_identify 00:16:13.043 ************************************ 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.303 ************************************ 00:16:13.303 START TEST nvmf_perf 00:16:13.303 ************************************ 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:13.303 * Looking for test 
storage... 00:16:13.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.303 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.303 --rc genhtml_branch_coverage=1 00:16:13.303 --rc genhtml_function_coverage=1 00:16:13.303 --rc genhtml_legend=1 00:16:13.304 --rc geninfo_all_blocks=1 00:16:13.304 --rc geninfo_unexecuted_blocks=1 00:16:13.304 00:16:13.304 ' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.304 --rc genhtml_branch_coverage=1 00:16:13.304 --rc genhtml_function_coverage=1 00:16:13.304 --rc genhtml_legend=1 00:16:13.304 --rc geninfo_all_blocks=1 00:16:13.304 --rc geninfo_unexecuted_blocks=1 00:16:13.304 00:16:13.304 ' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.304 --rc genhtml_branch_coverage=1 00:16:13.304 --rc genhtml_function_coverage=1 00:16:13.304 --rc genhtml_legend=1 00:16:13.304 --rc geninfo_all_blocks=1 00:16:13.304 --rc geninfo_unexecuted_blocks=1 00:16:13.304 00:16:13.304 ' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.304 --rc genhtml_branch_coverage=1 00:16:13.304 --rc genhtml_function_coverage=1 00:16:13.304 --rc genhtml_legend=1 00:16:13.304 --rc geninfo_all_blocks=1 00:16:13.304 --rc geninfo_unexecuted_blocks=1 00:16:13.304 00:16:13.304 ' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:13.304 Cannot find device "nvmf_init_br" 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:13.304 Cannot find device "nvmf_init_br2" 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:13.304 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:13.564 Cannot find device "nvmf_tgt_br" 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.564 Cannot find device "nvmf_tgt_br2" 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:13.564 Cannot find device "nvmf_init_br" 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:13.564 14:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:13.564 Cannot find device "nvmf_init_br2" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:13.564 Cannot find device "nvmf_tgt_br" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:13.564 Cannot find device "nvmf_tgt_br2" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:13.564 Cannot find device "nvmf_br" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:13.564 Cannot find device "nvmf_init_if" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:13.564 Cannot find device "nvmf_init_if2" 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:13.564 14:54:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.564 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:13.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:13.824 00:16:13.824 --- 10.0.0.3 ping statistics --- 00:16:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.824 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:13.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:13.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:16:13.824 00:16:13.824 --- 10.0.0.4 ping statistics --- 00:16:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.824 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:13.824 00:16:13.824 --- 10.0.0.1 ping statistics --- 00:16:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.824 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:13.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:13.824 00:16:13.824 --- 10.0.0.2 ping statistics --- 00:16:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.824 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74636 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74636 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74636 ']' 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
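(For reference: the virtual test network that the nvmftestinit/nvmf_veth_init trace above builds, and the target launch that follows it, can be reproduced by hand with roughly the commands below. This is a condensed sketch of what the trace already shows, not the test script itself; interface, namespace, and address names are the ones the test uses, and the second interface pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way.)
# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator gets 10.0.0.1, target (inside the namespace) gets 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the host-side veth peers so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic on the default port and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3          # sanity check before starting the target
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &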
00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.824 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:13.824 [2024-11-22 14:54:28.402026] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:13.824 [2024-11-22 14:54:28.402124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.083 [2024-11-22 14:54:28.549487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.083 [2024-11-22 14:54:28.602768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.083 [2024-11-22 14:54:28.602833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.083 [2024-11-22 14:54:28.602843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.083 [2024-11-22 14:54:28.602850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.083 [2024-11-22 14:54:28.602856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.083 [2024-11-22 14:54:28.604245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.083 [2024-11-22 14:54:28.604429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.083 [2024-11-22 14:54:28.604511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.083 [2024-11-22 14:54:28.604514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.083 [2024-11-22 14:54:28.679045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:14.342 14:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:14.912 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:14.912 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:14.912 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:14.912 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.478 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:15.478 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:15.478 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:15.478 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:15.478 14:54:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:15.736 [2024-11-22 14:54:30.190018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.736 14:54:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:15.994 14:54:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:15.994 14:54:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.252 14:54:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:16.252 14:54:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:16.509 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:16.767 [2024-11-22 14:54:31.287335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:16.767 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:17.025 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:17.025 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:17.025 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:17.025 14:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:18.402 Initializing NVMe Controllers 00:16:18.402 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:18.402 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:18.402 Initialization complete. Launching workers. 00:16:18.402 ======================================================== 00:16:18.402 Latency(us) 00:16:18.402 Device Information : IOPS MiB/s Average min max 00:16:18.402 PCIE (0000:00:10.0) NSID 1 from core 0: 21440.00 83.75 1491.85 329.45 8432.74 00:16:18.402 ======================================================== 00:16:18.402 Total : 21440.00 83.75 1491.85 329.45 8432.74 00:16:18.402 00:16:18.402 14:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:19.336 Initializing NVMe Controllers 00:16:19.336 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.336 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:19.336 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:19.336 Initialization complete. Launching workers. 
00:16:19.336 ======================================================== 00:16:19.336 Latency(us) 00:16:19.336 Device Information : IOPS MiB/s Average min max 00:16:19.336 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3700.00 14.45 269.96 94.80 7155.41 00:16:19.336 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.45 5989.59 14971.92 00:16:19.336 ======================================================== 00:16:19.336 Total : 3824.00 14.94 524.53 94.80 14971.92 00:16:19.336 00:16:19.593 14:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:20.970 Initializing NVMe Controllers 00:16:20.970 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:20.970 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:20.970 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:20.970 Initialization complete. Launching workers. 00:16:20.970 ======================================================== 00:16:20.970 Latency(us) 00:16:20.970 Device Information : IOPS MiB/s Average min max 00:16:20.970 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8642.88 33.76 3702.50 623.04 9372.10 00:16:20.970 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3699.82 14.45 8662.82 6283.06 16685.83 00:16:20.970 ======================================================== 00:16:20.970 Total : 12342.70 48.21 5189.40 623.04 16685.83 00:16:20.970 00:16:20.970 14:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:20.970 14:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:23.518 Initializing NVMe Controllers 00:16:23.518 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:23.518 Controller IO queue size 128, less than required. 00:16:23.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.518 Controller IO queue size 128, less than required. 00:16:23.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.518 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:23.518 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:23.518 Initialization complete. Launching workers. 
00:16:23.518 ======================================================== 00:16:23.518 Latency(us) 00:16:23.518 Device Information : IOPS MiB/s Average min max 00:16:23.518 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1541.28 385.32 84373.60 46493.49 295343.90 00:16:23.518 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 677.18 169.30 193056.91 93925.03 319390.63 00:16:23.518 ======================================================== 00:16:23.518 Total : 2218.46 554.62 117549.07 46493.49 319390.63 00:16:23.518 00:16:23.518 14:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:23.777 Initializing NVMe Controllers 00:16:23.777 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:23.777 Controller IO queue size 128, less than required. 00:16:23.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.777 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:23.777 Controller IO queue size 128, less than required. 00:16:23.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:23.777 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:23.777 WARNING: Some requested NVMe devices were skipped 00:16:23.777 No valid NVMe controllers or AIO or URING devices found 00:16:23.777 14:54:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:26.313 Initializing NVMe Controllers 00:16:26.313 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.313 Controller IO queue size 128, less than required. 00:16:26.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.313 Controller IO queue size 128, less than required. 00:16:26.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.313 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:26.313 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:26.313 Initialization complete. Launching workers. 
00:16:26.313 00:16:26.313 ==================== 00:16:26.313 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:26.313 TCP transport: 00:16:26.313 polls: 8844 00:16:26.313 idle_polls: 5198 00:16:26.313 sock_completions: 3646 00:16:26.313 nvme_completions: 6175 00:16:26.313 submitted_requests: 9272 00:16:26.313 queued_requests: 1 00:16:26.313 00:16:26.313 ==================== 00:16:26.313 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:26.313 TCP transport: 00:16:26.313 polls: 9159 00:16:26.313 idle_polls: 5082 00:16:26.313 sock_completions: 4077 00:16:26.313 nvme_completions: 6617 00:16:26.313 submitted_requests: 9896 00:16:26.313 queued_requests: 1 00:16:26.313 ======================================================== 00:16:26.313 Latency(us) 00:16:26.313 Device Information : IOPS MiB/s Average min max 00:16:26.313 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1543.47 385.87 85752.20 47916.09 140647.19 00:16:26.313 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1653.97 413.49 77829.49 35486.38 110001.77 00:16:26.313 ======================================================== 00:16:26.313 Total : 3197.43 799.36 81653.95 35486.38 140647.19 00:16:26.313 00:16:26.313 14:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:26.313 14:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.572 rmmod nvme_tcp 00:16:26.572 rmmod nvme_fabrics 00:16:26.572 rmmod nvme_keyring 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:26.572 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74636 ']' 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74636 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74636 ']' 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74636 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74636 00:16:26.573 killing process with pid 74636 00:16:26.573 14:54:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74636' 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74636 00:16:26.573 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74636 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:27.512 14:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.512 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:27.770 00:16:27.770 real 0m14.499s 00:16:27.770 user 0m52.188s 00:16:27.770 sys 0m4.092s 00:16:27.770 14:54:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:27.770 ************************************ 00:16:27.770 END TEST nvmf_perf 00:16:27.770 ************************************ 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.770 ************************************ 00:16:27.770 START TEST nvmf_fio_host 00:16:27.770 ************************************ 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:27.770 * Looking for test storage... 00:16:27.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.770 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.030 --rc genhtml_branch_coverage=1 00:16:28.030 --rc genhtml_function_coverage=1 00:16:28.030 --rc genhtml_legend=1 00:16:28.030 --rc geninfo_all_blocks=1 00:16:28.030 --rc geninfo_unexecuted_blocks=1 00:16:28.030 00:16:28.030 ' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.030 --rc genhtml_branch_coverage=1 00:16:28.030 --rc genhtml_function_coverage=1 00:16:28.030 --rc genhtml_legend=1 00:16:28.030 --rc geninfo_all_blocks=1 00:16:28.030 --rc geninfo_unexecuted_blocks=1 00:16:28.030 00:16:28.030 ' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.030 --rc genhtml_branch_coverage=1 00:16:28.030 --rc genhtml_function_coverage=1 00:16:28.030 --rc genhtml_legend=1 00:16:28.030 --rc geninfo_all_blocks=1 00:16:28.030 --rc geninfo_unexecuted_blocks=1 00:16:28.030 00:16:28.030 ' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.030 --rc genhtml_branch_coverage=1 00:16:28.030 --rc genhtml_function_coverage=1 00:16:28.030 --rc genhtml_legend=1 00:16:28.030 --rc geninfo_all_blocks=1 00:16:28.030 --rc geninfo_unexecuted_blocks=1 00:16:28.030 00:16:28.030 ' 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.030 14:54:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.030 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.031 14:54:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:28.031 Cannot find device "nvmf_init_br" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:28.031 Cannot find device "nvmf_init_br2" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:28.031 Cannot find device "nvmf_tgt_br" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:28.031 Cannot find device "nvmf_tgt_br2" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:28.031 Cannot find device "nvmf_init_br" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:28.031 Cannot find device "nvmf_init_br2" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:28.031 Cannot find device "nvmf_tgt_br" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:28.031 Cannot find device "nvmf_tgt_br2" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:28.031 Cannot find device "nvmf_br" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:28.031 Cannot find device "nvmf_init_if" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:28.031 Cannot find device "nvmf_init_if2" 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.031 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.032 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:28.032 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:28.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:28.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:28.291 00:16:28.291 --- 10.0.0.3 ping statistics --- 00:16:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.291 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:28.291 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:28.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:28.291 00:16:28.291 --- 10.0.0.4 ping statistics --- 00:16:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.291 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:28.291 00:16:28.291 --- 10.0.0.1 ping statistics --- 00:16:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.291 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:28.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:16:28.291 00:16:28.291 --- 10.0.0.2 ping statistics --- 00:16:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.291 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75106 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75106 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75106 ']' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.291 14:54:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.550 [2024-11-22 14:54:43.005715] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:28.550 [2024-11-22 14:54:43.005859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.550 [2024-11-22 14:54:43.163301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.809 [2024-11-22 14:54:43.251979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.809 [2024-11-22 14:54:43.252073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.809 [2024-11-22 14:54:43.252101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.809 [2024-11-22 14:54:43.252112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.809 [2024-11-22 14:54:43.252121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:28.809 [2024-11-22 14:54:43.253776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.809 [2024-11-22 14:54:43.253929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.809 [2024-11-22 14:54:43.254047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.809 [2024-11-22 14:54:43.254050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.809 [2024-11-22 14:54:43.331413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.378 14:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.378 14:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:29.378 14:54:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.637 [2024-11-22 14:54:44.180525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.637 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:29.637 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.637 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.637 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:29.897 Malloc1 00:16:30.156 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.156 14:54:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.416 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:30.675 [2024-11-22 14:54:45.242648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:30.675 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:30.933 14:54:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:31.192 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:31.192 fio-3.35 00:16:31.192 Starting 1 thread 00:16:33.728 00:16:33.728 test: (groupid=0, jobs=1): err= 0: pid=75189: Fri Nov 22 14:54:47 2024 00:16:33.728 read: IOPS=8176, BW=31.9MiB/s (33.5MB/s)(64.1MiB/2007msec) 00:16:33.728 slat (nsec): min=1718, max=399442, avg=2407.29, stdev=4168.91 00:16:33.728 clat (usec): min=2686, max=20693, avg=8164.82, stdev=1680.68 00:16:33.728 lat (usec): min=2742, max=20695, avg=8167.23, stdev=1680.54 00:16:33.728 clat percentiles (usec): 00:16:33.728 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6652], 20.00th=[ 7046], 00:16:33.728 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8160], 00:16:33.728 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[11600], 00:16:33.728 | 99.00th=[14877], 99.50th=[15795], 99.90th=[19792], 99.95th=[20055], 00:16:33.728 | 99.99th=[20579] 00:16:33.728 bw ( KiB/s): min=28998, max=36328, per=99.96%, avg=32693.50, stdev=3028.89, samples=4 00:16:33.728 iops : min= 7249, max= 9082, avg=8173.25, stdev=757.43, samples=4 00:16:33.728 write: IOPS=8178, BW=31.9MiB/s (33.5MB/s)(64.1MiB/2007msec); 0 zone resets 00:16:33.728 slat (nsec): min=1770, max=258479, avg=2460.36, stdev=2727.93 00:16:33.728 clat (usec): min=2545, max=19420, avg=7425.60, stdev=1496.52 00:16:33.728 lat (usec): min=2559, max=19422, avg=7428.06, stdev=1496.45 
00:16:33.728 clat percentiles (usec): 00:16:33.728 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6456], 00:16:33.728 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7439], 00:16:33.728 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[10421], 00:16:33.728 | 99.00th=[13435], 99.50th=[14746], 99.90th=[17433], 99.95th=[18482], 00:16:33.728 | 99.99th=[19268] 00:16:33.728 bw ( KiB/s): min=30011, max=35608, per=99.87%, avg=32674.75, stdev=2423.87, samples=4 00:16:33.728 iops : min= 7502, max= 8902, avg=8168.50, stdev=606.24, samples=4 00:16:33.728 lat (msec) : 4=0.09%, 10=94.00%, 20=5.89%, 50=0.02% 00:16:33.728 cpu : usr=68.69%, sys=24.43%, ctx=7, majf=0, minf=7 00:16:33.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:33.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.728 issued rwts: total=16411,16415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.728 00:16:33.728 Run status group 0 (all jobs): 00:16:33.728 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.1MiB (67.2MB), run=2007-2007msec 00:16:33.728 WRITE: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.1MiB (67.2MB), run=2007-2007msec 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.728 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:33.729 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:33.729 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.729 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:33.729 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.729 14:54:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:33.729 14:54:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:33.729 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:33.729 fio-3.35 00:16:33.729 Starting 1 thread 00:16:36.255 00:16:36.255 test: (groupid=0, jobs=1): err= 0: pid=75232: Fri Nov 22 14:54:50 2024 00:16:36.255 read: IOPS=8029, BW=125MiB/s (132MB/s)(252MiB/2008msec) 00:16:36.255 slat (usec): min=2, max=118, avg= 3.73, stdev= 2.46 00:16:36.255 clat (usec): min=3083, max=18306, avg=8821.50, stdev=2458.24 00:16:36.255 lat (usec): min=3086, max=18309, avg=8825.22, stdev=2458.33 00:16:36.255 clat percentiles (usec): 00:16:36.255 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 5735], 20.00th=[ 6521], 00:16:36.255 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9372], 00:16:36.255 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11994], 95.00th=[13042], 00:16:36.255 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16712], 99.95th=[16909], 00:16:36.255 | 99.99th=[17433] 00:16:36.255 bw ( KiB/s): min=57216, max=72064, per=51.16%, avg=65728.00, stdev=6257.67, samples=4 00:16:36.255 iops : min= 3576, max= 4504, avg=4108.00, stdev=391.10, samples=4 00:16:36.255 write: IOPS=4795, BW=74.9MiB/s (78.6MB/s)(135MiB/1799msec); 0 zone resets 00:16:36.255 slat (usec): min=29, max=366, avg=38.00, stdev= 9.99 00:16:36.255 clat (usec): min=5963, max=24474, avg=12538.82, stdev=2527.39 00:16:36.255 lat (usec): min=5994, max=24511, avg=12576.83, stdev=2529.42 00:16:36.255 clat percentiles (usec): 00:16:36.255 | 1.00th=[ 7570], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10421], 00:16:36.255 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:16:36.255 | 70.00th=[13698], 80.00th=[14615], 90.00th=[16057], 95.00th=[17171], 00:16:36.255 | 99.00th=[19268], 99.50th=[20055], 99.90th=[22152], 99.95th=[22676], 00:16:36.255 | 99.99th=[24511] 00:16:36.255 bw ( KiB/s): min=59072, max=75200, per=89.40%, avg=68592.00, stdev=6840.54, samples=4 00:16:36.255 iops : min= 3692, max= 4700, avg=4287.00, stdev=427.53, samples=4 00:16:36.255 lat (msec) : 4=0.14%, 10=48.84%, 20=50.86%, 50=0.17% 00:16:36.255 cpu : usr=78.62%, sys=16.69%, ctx=2, majf=0, minf=18 00:16:36.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:36.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.255 issued rwts: total=16123,8627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.256 00:16:36.256 Run status group 0 (all jobs): 00:16:36.256 
READ: bw=125MiB/s (132MB/s), 125MiB/s-125MiB/s (132MB/s-132MB/s), io=252MiB (264MB), run=2008-2008msec 00:16:36.256 WRITE: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=135MiB (141MB), run=1799-1799msec 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.256 rmmod nvme_tcp 00:16:36.256 rmmod nvme_fabrics 00:16:36.256 rmmod nvme_keyring 00:16:36.256 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75106 ']' 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75106 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75106 ']' 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75106 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75106 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.514 killing process with pid 75106 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75106' 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75106 00:16:36.514 14:54:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75106 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:36.773 14:54:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.773 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:37.031 00:16:37.031 real 0m9.204s 00:16:37.031 user 0m35.963s 00:16:37.031 sys 0m2.718s 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.031 ************************************ 00:16:37.031 END TEST nvmf_fio_host 00:16:37.031 ************************************ 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.031 ************************************ 00:16:37.031 START TEST nvmf_failover 
00:16:37.031 ************************************ 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:37.031 * Looking for test storage... 00:16:37.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:16:37.031 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.291 --rc genhtml_branch_coverage=1 00:16:37.291 --rc genhtml_function_coverage=1 00:16:37.291 --rc genhtml_legend=1 00:16:37.291 --rc geninfo_all_blocks=1 00:16:37.291 --rc geninfo_unexecuted_blocks=1 00:16:37.291 00:16:37.291 ' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.291 --rc genhtml_branch_coverage=1 00:16:37.291 --rc genhtml_function_coverage=1 00:16:37.291 --rc genhtml_legend=1 00:16:37.291 --rc geninfo_all_blocks=1 00:16:37.291 --rc geninfo_unexecuted_blocks=1 00:16:37.291 00:16:37.291 ' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.291 --rc genhtml_branch_coverage=1 00:16:37.291 --rc genhtml_function_coverage=1 00:16:37.291 --rc genhtml_legend=1 00:16:37.291 --rc geninfo_all_blocks=1 00:16:37.291 --rc geninfo_unexecuted_blocks=1 00:16:37.291 00:16:37.291 ' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.291 --rc genhtml_branch_coverage=1 00:16:37.291 --rc genhtml_function_coverage=1 00:16:37.291 --rc genhtml_legend=1 00:16:37.291 --rc geninfo_all_blocks=1 00:16:37.291 --rc geninfo_unexecuted_blocks=1 00:16:37.291 00:16:37.291 ' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.291 
14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
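nvmftestinit then rebuilds the virtual test network for this virt/veth configuration; the steps that follow in the log amount to the sketch below (interface names, namespace, and addresses are the ones the log itself prints, shown only as a condensed summary):
# target side lives in netns nvmf_tgt_ns_spdk, initiator side stays in the default namespace
#   nvmf_init_if 10.0.0.1 / nvmf_init_if2 10.0.0.2   (initiator, default netns)
#   nvmf_tgt_if  10.0.0.3 / nvmf_tgt_if2  10.0.0.4   (inside nvmf_tgt_ns_spdk)
# all veth peers are enslaved to bridge nvmf_br, then reachability is checked both ways:
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2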
00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:37.291 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:37.292 Cannot find device "nvmf_init_br" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:37.292 Cannot find device "nvmf_init_br2" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:37.292 Cannot find device "nvmf_tgt_br" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.292 Cannot find device "nvmf_tgt_br2" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:37.292 Cannot find device "nvmf_init_br" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:37.292 Cannot find device "nvmf_init_br2" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:37.292 Cannot find device "nvmf_tgt_br" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:37.292 Cannot find device "nvmf_tgt_br2" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:37.292 Cannot find device "nvmf_br" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:37.292 Cannot find device "nvmf_init_if" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:37.292 Cannot find device "nvmf_init_if2" 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.292 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.292 
14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:37.551 14:54:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:37.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:16:37.551 00:16:37.551 --- 10.0.0.3 ping statistics --- 00:16:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.551 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:37.551 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:37.551 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:16:37.551 00:16:37.551 --- 10.0.0.4 ping statistics --- 00:16:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.551 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:37.551 00:16:37.551 --- 10.0.0.1 ping statistics --- 00:16:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.551 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:37.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:37.551 00:16:37.551 --- 10.0.0.2 ping statistics --- 00:16:37.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.551 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75509 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75509 00:16:37.551 14:54:52 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75509 ']' 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.551 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:37.551 [2024-11-22 14:54:52.208574] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:37.551 [2024-11-22 14:54:52.208652] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.841 [2024-11-22 14:54:52.353923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.841 [2024-11-22 14:54:52.412099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.841 [2024-11-22 14:54:52.412189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.841 [2024-11-22 14:54:52.412199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.841 [2024-11-22 14:54:52.412207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.841 [2024-11-22 14:54:52.412228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
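The app_setup_trace notices above point at how a tracepoint snapshot could be taken from this target while it runs; a minimal sketch, assuming the spdk_trace tool was built under build/bin in this repo (the '-i 0' shm id and the /dev/shm path are copied from the notices):
# live snapshot of the nvmf tracepoint group from the target started with '-i 0 -e 0xFFFF' above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# or keep the shared-memory trace copy the notice mentions for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0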
00:16:37.841 [2024-11-22 14:54:52.413720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.841 [2024-11-22 14:54:52.413824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.841 [2024-11-22 14:54:52.413830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.115 [2024-11-22 14:54:52.496073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.115 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.115 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:38.115 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.116 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:38.116 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:38.116 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.116 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.374 [2024-11-22 14:54:52.912505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.374 14:54:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:38.631 Malloc0 00:16:38.631 14:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:38.889 14:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.147 14:54:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:39.405 [2024-11-22 14:54:54.014063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:39.405 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:39.663 [2024-11-22 14:54:54.266193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:39.663 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:39.922 [2024-11-22 14:54:54.510494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75559 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
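The target-side configuration host/failover.sh applied above boils down to the following RPC sequence (every value is copied from the log; a condensed recap, not a replacement for the script):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                      # same transport options the log shows
$rpc bdev_malloc_create 64 512 -b Malloc0                         # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the script
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                    # three TCP listeners for bdevperf to fail over between
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
done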
00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75559 /var/tmp/bdevperf.sock 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75559 ']' 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.922 14:54:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:41.297 14:54:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.297 14:54:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:41.297 14:54:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:41.297 NVMe0n1 00:16:41.297 14:54:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:41.555 00:16:41.555 14:54:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75587 00:16:41.555 14:54:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:41.556 14:54:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:42.930 14:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:42.930 14:54:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:46.212 14:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:46.212 00:16:46.212 14:55:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:46.470 14:55:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:49.754 14:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:49.754 [2024-11-22 14:55:04.323739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.754 14:55:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:50.689 14:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:50.947 14:55:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75587 00:16:57.514 { 00:16:57.514 "results": [ 00:16:57.514 { 00:16:57.514 "job": "NVMe0n1", 00:16:57.514 "core_mask": "0x1", 00:16:57.514 "workload": "verify", 00:16:57.514 "status": "finished", 00:16:57.514 "verify_range": { 00:16:57.514 "start": 0, 00:16:57.514 "length": 16384 00:16:57.514 }, 00:16:57.514 "queue_depth": 128, 00:16:57.514 "io_size": 4096, 00:16:57.514 "runtime": 15.009494, 00:16:57.514 "iops": 9953.16697551563, 00:16:57.514 "mibps": 38.87955849810793, 00:16:57.514 "io_failed": 3493, 00:16:57.514 "io_timeout": 0, 00:16:57.514 "avg_latency_us": 12536.81435117899, 00:16:57.514 "min_latency_us": 547.3745454545455, 00:16:57.514 "max_latency_us": 15728.64 00:16:57.514 } 00:16:57.514 ], 00:16:57.514 "core_count": 1 00:16:57.514 } 00:16:57.514 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75559 00:16:57.514 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75559 ']' 00:16:57.514 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75559 00:16:57.514 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:57.514 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75559 00:16:57.515 killing process with pid 75559 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75559' 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75559 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75559 00:16:57.515 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:57.515 [2024-11-22 14:54:54.588323] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:16:57.515 [2024-11-22 14:54:54.588516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75559 ] 00:16:57.515 [2024-11-22 14:54:54.735787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.515 [2024-11-22 14:54:54.809936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.515 [2024-11-22 14:54:54.887526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.515 Running I/O for 15 seconds... 
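The ABORTED - SQ DELETION completions that follow are what the listener churn looks like from the initiator side while bdevperf keeps its 15-second verify run going; the sequence the script drove above, condensed (addresses, ports, and the bdevperf RPC socket are copied from the log):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf="$rpc -s /var/tmp/bdevperf.sock"
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the active path
sleep 3
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # bring the first path back
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422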
00:16:57.515 7701.00 IOPS, 30.08 MiB/s [2024-11-22T14:55:12.180Z] [2024-11-22 14:54:57.441201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.515 [2024-11-22 14:54:57.441260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.515 [2024-11-22 14:54:57.441292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.515 [2024-11-22 14:54:57.441316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.515 [2024-11-22 14:54:57.441342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2710 is same with the state(6) to be set 00:16:57.515 [2024-11-22 14:54:57.441607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.515 [2024-11-22 14:54:57.441634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 
[2024-11-22 14:54:57.441798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.441976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.441989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.515 [2024-11-22 14:54:57.442303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.515 [2024-11-22 14:54:57.442315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 
14:54:57.442928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.442980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.442992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.516 [2024-11-22 14:54:57.443173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.516 [2024-11-22 14:54:57.443186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:64 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.443984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.443997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.444023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.444074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 
14:54:57.444099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.444125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.517 [2024-11-22 14:54:57.444139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.517 [2024-11-22 14:54:57.444151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.518 [2024-11-22 14:54:57.444750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.444975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.444987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.445007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.518 [2024-11-22 14:54:57.445020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.518 [2024-11-22 14:54:57.445034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:54:57.445046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:54:57.445060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:54:57.445072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:54:57.445085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:54:57.445097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:54:57.445112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:54:57.445124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:54:57.445137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:54:57.445149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:54:57.445162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:54:57.445175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 
14:54:57.445187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d220 is same with the state(6) to be set
00:16:57.519 [2024-11-22 14:54:57.445202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:57.519 [2024-11-22 14:54:57.445212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:57.519 [2024-11-22 14:54:57.445221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70808 len:8 PRP1 0x0 PRP2 0x0
00:16:57.519 [2024-11-22 14:54:57.445239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:54:57.445308] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:16:57.519 [2024-11-22 14:54:57.445327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:16:57.519 [2024-11-22 14:54:57.448609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:16:57.519 [2024-11-22 14:54:57.448645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2710 (9): Bad file descriptor
00:16:57.519 [2024-11-22 14:54:57.477431] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:16:57.519 8808.50 IOPS, 34.41 MiB/s [2024-11-22T14:55:12.184Z] 9425.00 IOPS, 36.82 MiB/s [2024-11-22T14:55:12.184Z] 9716.25 IOPS, 37.95 MiB/s [2024-11-22T14:55:12.184Z] [2024-11-22 14:55:01.055098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:57.519 [2024-11-22 14:55:01.055198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:55:01.055226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:57.519 [2024-11-22 14:55:01.055242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:55:01.055256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:57.519 [2024-11-22 14:55:01.055268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:55:01.055281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:57.519 [2024-11-22 14:55:01.055294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:55:01.055307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:57.519 [2024-11-22 14:55:01.055319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:57.519 [2024-11-22 14:55:01.055333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1
lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:57.519 [2024-11-22 14:55:01.055663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:55:01.055690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:55:01.055722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:55:01.055764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:55:01.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.519 [2024-11-22 14:55:01.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.519 [2024-11-22 14:55:01.055814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.055839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.055865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.055890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.055916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 
14:55:01.055942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.055975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.055988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.520 [2024-11-22 14:55:01.056313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.520 [2024-11-22 14:55:01.056468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.520 [2024-11-22 14:55:01.056480] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.521 [2024-11-22 14:55:01.056746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.056983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.056996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 
[2024-11-22 14:55:01.057283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.521 [2024-11-22 14:55:01.057383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.521 [2024-11-22 14:55:01.057398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.522 [2024-11-22 14:55:01.057812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.057977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.057992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058094] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.522 [2024-11-22 14:55:01.058333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.522 [2024-11-22 14:55:01.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.523 [2024-11-22 14:55:01.058395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.523 [2024-11-22 14:55:01.058430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.523 [2024-11-22 14:55:01.058457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a91d0 is same with the state(6) to be set 00:16:57.523 [2024-11-22 14:55:01.058485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121728 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122184 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122192 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122200 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:16:57.523 [2024-11-22 14:55:01.058678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122208 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122216 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122224 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122232 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.523 [2024-11-22 14:55:01.058883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.523 [2024-11-22 14:55:01.058892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122240 len:8 PRP1 0x0 PRP2 0x0 00:16:57.523 [2024-11-22 14:55:01.058904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.058973] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:57.523 [2024-11-22 14:55:01.059026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.523 [2024-11-22 14:55:01.059046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.059060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.523 [2024-11-22 14:55:01.059072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.059085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.523 [2024-11-22 14:55:01.059096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.059110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.523 [2024-11-22 14:55:01.059121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:01.059133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:57.523 [2024-11-22 14:55:01.062455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:57.523 [2024-11-22 14:55:01.062493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2710 (9): Bad file descriptor 00:16:57.523 [2024-11-22 14:55:01.087525] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:16:57.523 9785.80 IOPS, 38.23 MiB/s [2024-11-22T14:55:12.188Z] 9897.50 IOPS, 38.66 MiB/s [2024-11-22T14:55:12.188Z] 9981.86 IOPS, 38.99 MiB/s [2024-11-22T14:55:12.188Z] 10043.12 IOPS, 39.23 MiB/s [2024-11-22T14:55:12.188Z] 10097.00 IOPS, 39.44 MiB/s [2024-11-22T14:55:12.188Z] [2024-11-22 14:55:05.561929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:05.562028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:05.562059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:05.562085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:05.562112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.523 [2024-11-22 14:55:05.562139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.523 [2024-11-22 14:55:05.562151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.524 [2024-11-22 14:55:05.562949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.562977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.562992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.563005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.563019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.563031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.563046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 
[2024-11-22 14:55:05.563058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.524 [2024-11-22 14:55:05.563098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.524 [2024-11-22 14:55:05.563113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.525 [2024-11-22 14:55:05.563677] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.525 [2024-11-22 14:55:05.563848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.525 [2024-11-22 14:55:05.563861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.563876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.563888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.563902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.563914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.563928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.563941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.563955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.563968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.563982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:57.526 [2024-11-22 14:55:05.564255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 
14:55:05.564540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.526 [2024-11-22 14:55:05.564590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.526 [2024-11-22 14:55:05.564738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.526 [2024-11-22 14:55:05.564751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.564777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.564804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.564979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.564994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:57.527 [2024-11-22 14:55:05.565263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:57.527 [2024-11-22 14:55:05.565487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf71370 is same with the state(6) to be set 00:16:57.527 [2024-11-22 14:55:05.565517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.527 [2024-11-22 14:55:05.565528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.527 [2024-11-22 14:55:05.565539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110896 len:8 PRP1 0x0 PRP2 0x0 00:16:57.527 [2024-11-22 14:55:05.565552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.527 [2024-11-22 14:55:05.565576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.527 [2024-11-22 14:55:05.565587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111480 len:8 PRP1 0x0 PRP2 0x0 00:16:57.527 [2024-11-22 14:55:05.565599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.527 [2024-11-22 14:55:05.565621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.527 [2024-11-22 14:55:05.565631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111488 len:8 PRP1 0x0 PRP2 0x0 00:16:57.527 [2024-11-22 14:55:05.565653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.527 [2024-11-22 14:55:05.565666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.527 [2024-11-22 14:55:05.565676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111496 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.565712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
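Once the queued I/O has been drained, the trace below shows the failover completing ("Resetting controller successful") and failover.sh grading the run by counting those notices in its capture file; the count has to be exactly 3. A condensed sketch of that check, with the capture path taken from the trace and the variable name and error message being illustrative only:

  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi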
00:16:57.528 [2024-11-22 14:55:05.565721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111504 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.565755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.528 [2024-11-22 14:55:05.565780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111512 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.565814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.528 [2024-11-22 14:55:05.565823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111520 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.565863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.528 [2024-11-22 14:55:05.565872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111528 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.565907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:57.528 [2024-11-22 14:55:05.565916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:57.528 [2024-11-22 14:55:05.565926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111536 len:8 PRP1 0x0 PRP2 0x0 00:16:57.528 [2024-11-22 14:55:05.565954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.566027] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:57.528 [2024-11-22 14:55:05.566085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.528 [2024-11-22 14:55:05.566106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.566122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.528 [2024-11-22 14:55:05.566135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.566158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.528 [2024-11-22 14:55:05.566171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.566185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:57.528 [2024-11-22 14:55:05.566198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:57.528 [2024-11-22 14:55:05.566211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:57.528 [2024-11-22 14:55:05.566261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2710 (9): Bad file descriptor 00:16:57.528 [2024-11-22 14:55:05.569693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:57.528 [2024-11-22 14:55:05.593661] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:16:57.528 10091.20 IOPS, 39.42 MiB/s [2024-11-22T14:55:12.193Z] 10115.64 IOPS, 39.51 MiB/s [2024-11-22T14:55:12.193Z] 10124.00 IOPS, 39.55 MiB/s [2024-11-22T14:55:12.193Z] 10076.31 IOPS, 39.36 MiB/s [2024-11-22T14:55:12.193Z] 10007.43 IOPS, 39.09 MiB/s [2024-11-22T14:55:12.193Z] 9953.07 IOPS, 38.88 MiB/s 00:16:57.528 Latency(us) 00:16:57.528 [2024-11-22T14:55:12.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.528 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:57.528 Verification LBA range: start 0x0 length 0x4000 00:16:57.528 NVMe0n1 : 15.01 9953.17 38.88 232.72 0.00 12536.81 547.37 15728.64 00:16:57.528 [2024-11-22T14:55:12.193Z] =================================================================================================================== 00:16:57.528 [2024-11-22T14:55:12.193Z] Total : 9953.17 38.88 232.72 0.00 12536.81 547.37 15728.64 00:16:57.528 Received shutdown signal, test time was about 15.000000 seconds 00:16:57.528 00:16:57.528 Latency(us) 00:16:57.528 [2024-11-22T14:55:12.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.528 [2024-11-22T14:55:12.193Z] =================================================================================================================== 00:16:57.528 [2024-11-22T14:55:12.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75762 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75762 
/var/tmp/bdevperf.sock 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75762 ']' 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.528 14:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:57.528 14:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.528 14:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:57.528 14:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:57.786 [2024-11-22 14:55:12.419087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:58.043 14:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:58.043 [2024-11-22 14:55:12.698136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:58.300 14:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:58.558 NVMe0n1 00:16:58.558 14:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:58.814 00:16:58.814 14:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:59.379 00:16:59.379 14:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:59.379 14:55:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:59.637 14:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:59.895 14:55:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:03.207 14:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.207 14:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:03.207 14:55:17 
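The second pass above drives failover purely through bdevperf's RPC socket: two extra portals are exposed on the subsystem, the same subsystem is attached three times as failover paths on one bdev, and the active portal is then detached so bdev_nvme has to move the I/O; bdevperf.py perform_tests (traced just below) runs the verify workload across the switch. A condensed sketch of those RPCs, with paths and ports copied from the trace and the loop and the rpc shorthand added here for brevity:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  # register all three portals as failover paths on the same NVMe0 bdev
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # drop the active portal; the controller should survive on one of the remaining paths
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0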
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75837 00:17:03.207 14:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.207 14:55:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75837 00:17:04.586 { 00:17:04.586 "results": [ 00:17:04.586 { 00:17:04.586 "job": "NVMe0n1", 00:17:04.586 "core_mask": "0x1", 00:17:04.586 "workload": "verify", 00:17:04.586 "status": "finished", 00:17:04.586 "verify_range": { 00:17:04.586 "start": 0, 00:17:04.586 "length": 16384 00:17:04.586 }, 00:17:04.586 "queue_depth": 128, 00:17:04.586 "io_size": 4096, 00:17:04.586 "runtime": 1.006183, 00:17:04.586 "iops": 7032.517941567289, 00:17:04.586 "mibps": 27.470773209247223, 00:17:04.586 "io_failed": 0, 00:17:04.586 "io_timeout": 0, 00:17:04.586 "avg_latency_us": 18111.26892851637, 00:17:04.586 "min_latency_us": 1228.8, 00:17:04.586 "max_latency_us": 15371.17090909091 00:17:04.586 } 00:17:04.586 ], 00:17:04.586 "core_count": 1 00:17:04.586 } 00:17:04.586 14:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:04.586 [2024-11-22 14:55:11.720646] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:17:04.586 [2024-11-22 14:55:11.720792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75762 ] 00:17:04.586 [2024-11-22 14:55:11.870079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.586 [2024-11-22 14:55:11.934963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.586 [2024-11-22 14:55:12.015951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.586 [2024-11-22 14:55:14.386619] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:17:04.586 [2024-11-22 14:55:14.386760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.586 [2024-11-22 14:55:14.386788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.586 [2024-11-22 14:55:14.386810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.586 [2024-11-22 14:55:14.386825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.586 [2024-11-22 14:55:14.386841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.586 [2024-11-22 14:55:14.386855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.586 [2024-11-22 14:55:14.386870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.586 [2024-11-22 14:55:14.386885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:04.586 [2024-11-22 14:55:14.386900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:17:04.586 [2024-11-22 14:55:14.386960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:17:04.586 [2024-11-22 14:55:14.386995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c8710 (9): Bad file descriptor 00:17:04.586 [2024-11-22 14:55:14.398232] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:17:04.586 Running I/O for 1 seconds... 00:17:04.586 6948.00 IOPS, 27.14 MiB/s 00:17:04.586 Latency(us) 00:17:04.586 [2024-11-22T14:55:19.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.586 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:04.586 Verification LBA range: start 0x0 length 0x4000 00:17:04.586 NVMe0n1 : 1.01 7032.52 27.47 0.00 0.00 18111.27 1228.80 15371.17 00:17:04.586 [2024-11-22T14:55:19.251Z] =================================================================================================================== 00:17:04.586 [2024-11-22T14:55:19.251Z] Total : 7032.52 27.47 0.00 0.00 18111.27 1228.80 15371.17 00:17:04.586 14:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:04.586 14:55:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:04.586 14:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.844 14:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:04.844 14:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:05.103 14:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:05.671 14:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75762 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75762 ']' 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75762 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75762 00:17:08.961 killing process with pid 75762 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75762' 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75762 00:17:08.961 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75762 00:17:09.220 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:09.220 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.479 rmmod nvme_tcp 00:17:09.479 rmmod nvme_fabrics 00:17:09.479 rmmod nvme_keyring 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75509 ']' 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75509 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75509 ']' 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75509 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.479 14:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75509 00:17:09.479 killing process with pid 75509 00:17:09.479 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.479 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.479 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75509' 00:17:09.479 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75509 00:17:09.479 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75509 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.739 14:55:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:09.739 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:09.998 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:09.999 00:17:09.999 real 0m32.985s 00:17:09.999 user 2m7.373s 00:17:09.999 sys 0m5.861s 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.999 ************************************ 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:09.999 END TEST nvmf_failover 00:17:09.999 ************************************ 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.999 ************************************ 00:17:09.999 START TEST nvmf_host_discovery 00:17:09.999 ************************************ 00:17:09.999 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:09.999 * Looking for test storage... 00:17:10.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:10.259 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.260 --rc genhtml_branch_coverage=1 00:17:10.260 --rc genhtml_function_coverage=1 00:17:10.260 --rc genhtml_legend=1 00:17:10.260 --rc geninfo_all_blocks=1 00:17:10.260 --rc geninfo_unexecuted_blocks=1 00:17:10.260 00:17:10.260 ' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.260 --rc genhtml_branch_coverage=1 00:17:10.260 --rc genhtml_function_coverage=1 00:17:10.260 --rc genhtml_legend=1 00:17:10.260 --rc geninfo_all_blocks=1 00:17:10.260 --rc geninfo_unexecuted_blocks=1 00:17:10.260 00:17:10.260 ' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.260 --rc genhtml_branch_coverage=1 00:17:10.260 --rc genhtml_function_coverage=1 00:17:10.260 --rc genhtml_legend=1 00:17:10.260 --rc geninfo_all_blocks=1 00:17:10.260 --rc geninfo_unexecuted_blocks=1 00:17:10.260 00:17:10.260 ' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.260 --rc genhtml_branch_coverage=1 00:17:10.260 --rc genhtml_function_coverage=1 00:17:10.260 --rc genhtml_legend=1 00:17:10.260 --rc geninfo_all_blocks=1 00:17:10.260 --rc geninfo_unexecuted_blocks=1 00:17:10.260 00:17:10.260 ' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:10.260 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
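Before the discovery test can reach a target, nvmf_veth_init (traced below) builds a veth-and-bridge topology in which the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4 and the initiator stays in the root namespace on 10.0.0.1/10.0.0.2. A condensed sketch covering one of the two interface pairs, with device names, addresses and the iptables rule copied from the trace and the comments added here:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends are later enslaved to a common bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # let NVMe/TCP traffic into the initiator interface, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3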
00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:10.261 Cannot find device "nvmf_init_br" 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:10.261 Cannot find device "nvmf_init_br2" 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:10.261 Cannot find device "nvmf_tgt_br" 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.261 Cannot find device "nvmf_tgt_br2" 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:10.261 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:10.520 Cannot find device "nvmf_init_br" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:10.520 Cannot find device "nvmf_init_br2" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:10.520 Cannot find device "nvmf_tgt_br" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:10.520 Cannot find device "nvmf_tgt_br2" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:10.520 Cannot find device "nvmf_br" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:10.520 Cannot find device "nvmf_init_if" 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:10.520 14:55:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:10.520 Cannot find device "nvmf_init_if2" 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:10.520 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:10.521 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.521 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:10.521 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.521 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:10.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:10.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:10.780 00:17:10.780 --- 10.0.0.3 ping statistics --- 00:17:10.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.780 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:10.780 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:10.780 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:10.780 00:17:10.780 --- 10.0.0.4 ping statistics --- 00:17:10.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.780 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:10.780 00:17:10.780 --- 10.0.0.1 ping statistics --- 00:17:10.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.780 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:10.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:17:10.780 00:17:10.780 --- 10.0.0.2 ping statistics --- 00:17:10.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.780 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76164 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76164 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76164 ']' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.780 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.780 [2024-11-22 14:55:25.308747] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
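The trace above (nvmf/common.sh) tears down any stale interfaces, builds a private veth/bridge topology with the target side inside a network namespace, opens the firewall for NVMe/TCP port 4420, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace. A condensed, hedged recap of those steps using only the interface names, addresses, and flags visible in the trace (the for-loops and && grouping are a condensation; the individual commands are as logged; run as root):

    # target-side namespace and the four veth pairs (the *_br ends stay on the host and join the bridge)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiator side 10.0.0.1/.2, target side 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # allow NVMe/TCP (port 4420) in on the initiator interfaces and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # sanity-check both directions, then start the target inside the namespace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # the framework backgrounds the app, records its pid (76164 here) and waits for /var/tmp/spdk.sock via waitforlisten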
00:17:10.780 [2024-11-22 14:55:25.308863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.039 [2024-11-22 14:55:25.463520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.039 [2024-11-22 14:55:25.536858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.039 [2024-11-22 14:55:25.536941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.039 [2024-11-22 14:55:25.536966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.039 [2024-11-22 14:55:25.536977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.039 [2024-11-22 14:55:25.536987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.039 [2024-11-22 14:55:25.537521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.039 [2024-11-22 14:55:25.615228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.039 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.039 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:11.039 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.039 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.039 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.298 [2024-11-22 14:55:25.739031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.298 [2024-11-22 14:55:25.747225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.298 14:55:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.298 null0 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:11.298 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.299 null1 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76194 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76194 /tmp/host.sock 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76194 ']' 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.299 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.299 14:55:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.299 [2024-11-22 14:55:25.841046] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
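With the target running inside nvmf_tgt_ns_spdk, the trace configures it over the default RPC socket and then starts a second SPDK application that plays the host role, with its own RPC server on /tmp/host.sock. A minimal recap of those calls as they appear above (rpc_cmd is the test framework's wrapper around SPDK's JSON-RPC client; arguments are reproduced verbatim from the trace, and the $! capture is an assumption about how $hostpid is recorded):

    # target side: TCP transport, discovery listener on 8009, and two null bdevs to export later
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    rpc_cmd bdev_null_create null0 1000 512     # arguments exactly as in the trace
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # host side: a second nvmf_tgt instance pinned to core 0, RPC server on /tmp/host.sock
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!   # 76194 in this run; the test then waits for /tmp/host.sock to appear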
00:17:11.299 [2024-11-22 14:55:25.841160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76194 ] 00:17:11.557 [2024-11-22 14:55:25.988338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.557 [2024-11-22 14:55:26.042835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.557 [2024-11-22 14:55:26.114850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.125 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.384 14:55:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:12.384 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.385 14:55:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.385 14:55:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:12.385 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.643 [2024-11-22 14:55:27.143583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:12.643 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:12.903 14:55:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:13.162 [2024-11-22 14:55:27.812907] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:13.163 [2024-11-22 14:55:27.812976] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:13.163 [2024-11-22 14:55:27.813015] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:13.163 [2024-11-22 14:55:27.819007] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:13.421 [2024-11-22 14:55:27.873599] 
bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:13.421 [2024-11-22 14:55:27.875042] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9d3e60:1 started. 00:17:13.421 [2024-11-22 14:55:27.877675] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:13.421 [2024-11-22 14:55:27.877712] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:13.421 [2024-11-22 14:55:27.881270] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9d3e60 was disconnected and freed. delete nvme_qpair. 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 14:55:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
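Everything after the host app comes up is driven through /tmp/host.sock: the host enables bdev_nvme logging and starts the discovery service against 10.0.0.3:8009, the target-side RPCs build up nqn.2016-06.io.spdk:cnode0, and the test repeatedly queries three small jq helpers until the host's view matches. A sketch of the host-side commands and of the helpers exactly as their pipelines appear in the trace (the function wrappers belong to the test's host/discovery.sh; only the surrounding shell-function syntax is reconstructed):

    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    get_subsystem_names() {   # controllers attached by the discovery service, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # namespaces surfaced as host bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # listener ports of the attached paths for controller $1, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # target side, in the order the trace adds it: subsystem, first namespace, data listener, allowed host
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # after which the discovery log page drives the host to attach nvme0, expose nvme0n1,
    # and get_subsystem_paths nvme0 reports 4420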
00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.991 [2024-11-22 14:55:28.635786] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9e2000:1 started. 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:13.991 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.991 [2024-11-22 14:55:28.641900] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9e2000 was disconnected and freed. delete nvme_qpair. 
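The pass/fail logic running above rests on two polling helpers from the test framework: get_notification_count counts RPC notifications received since the last recorded notify_id, and waitforcondition re-evaluates an arbitrary shell condition up to ten times, one second apart. A hedged reconstruction from the fragments visible in the trace (max=10, the eval, return 0 and sleep 1 steps and the notify_id arithmetic match what is logged; the loop framing and the final failure return are assumptions):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1    # assumed failure path; not visible in this passing run
    }

    # e.g. after the second namespace is exported the test expects both bdevs on the host
    # plus exactly one new notification:
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    expected_count=1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'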
00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.251 [2024-11-22 14:55:28.749822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:14.251 [2024-11-22 14:55:28.750195] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:14.251 [2024-11-22 14:55:28.750223] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:14.251 [2024-11-22 14:55:28.756208] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:14.251 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:14.251 [2024-11-22 14:55:28.816623] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:14.251 [2024-11-22 14:55:28.816683] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:14.251 [2024-11-22 14:55:28.816697] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:14.252 [2024-11-22 14:55:28.816703] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:14.252 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 [2024-11-22 14:55:28.987221] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:14.511 [2024-11-22 14:55:28.987267] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.511 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:14.511 [2024-11-22 14:55:28.993236] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:14.511 [2024-11-22 14:55:28.993270] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:14.512 [2024-11-22 14:55:28.993411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.512 [2024-11-22 14:55:28.993440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.512 [2024-11-22 14:55:28.993455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.512 [2024-11-22 14:55:28.993465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.512 [2024-11-22 14:55:28.993475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.512 [2024-11-22 14:55:28.993485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.512 [2024-11-22 14:55:28.993496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.512 [2024-11-22 14:55:28.993506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.512 [2024-11-22 14:55:28.993516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0230 is same with the state(6) to be set 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.512 14:55:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:14.512 14:55:29 
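The checks above are driven by the test's waitforcondition helper, which re-evaluates a condition string a bounded number of times before failing the test. A minimal sketch of that retry loop, assuming a one-second pause between attempts (the real helper in autotest_common.sh may back off differently):

# Retry a shell condition until it holds or the attempt budget runs out.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0      # condition satisfied
        fi
        sleep 1           # assumed pacing; the actual helper may differ
    done
    return 1              # condition never became true
}
# Usage mirroring the trace:
#   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'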
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.775 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:14.776 14:55:29 
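The is_notification_count_eq steps above rely on get_notification_count, which asks the target for notifications newer than the last seen notify_id and counts them with jq. A rough equivalent, assuming the same /tmp/host.sock RPC socket and that notify_id advances by the number of events seen (the exact bookkeeping lives in host/discovery.sh and may differ slightly):

# Count SPDK notifications issued after $notify_id and advance the cursor.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# In the trace: with notify_id=2 and no new events, notification_count=0 and
# notify_id stays 2; after the teardown below, two remove events make it 4.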
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.776 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.778 14:55:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 [2024-11-22 14:55:30.403253] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:16.162 [2024-11-22 14:55:30.403280] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:16.162 [2024-11-22 14:55:30.403326] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:16.162 [2024-11-22 14:55:30.409296] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:16.162 [2024-11-22 14:55:30.467597] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:16.162 [2024-11-22 14:55:30.468538] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x9a8d90:1 started. 00:17:16.162 [2024-11-22 14:55:30.471001] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:16.162 [2024-11-22 14:55:30.471053] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:16.162 [2024-11-22 14:55:30.472582] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x9a8d90 was disconnected and freed. delete nvme_qpair. 
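The restart just traced is issued through the bdev_nvme_start_discovery RPC; starting a second discovery service under the same controller name is expected to fail, which is exactly what the -17 "File exists" response below verifies. Outside the harness the same calls could be made directly with rpc.py from the SPDK repo (sketch only; socket path, address, and NQN follow the values in the trace):

# Start a discovery service against the target and wait for the attach (-w).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# A second start reusing the ctrlr name "nvme" should return JSON-RPC error
# -17 ("File exists"), as the request/response pair below shows.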
00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 request: 00:17:16.162 { 00:17:16.162 "name": "nvme", 00:17:16.162 "trtype": "tcp", 00:17:16.162 "traddr": "10.0.0.3", 00:17:16.162 "adrfam": "ipv4", 00:17:16.162 "trsvcid": "8009", 00:17:16.162 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:16.162 "wait_for_attach": true, 00:17:16.162 "method": "bdev_nvme_start_discovery", 00:17:16.162 "req_id": 1 00:17:16.162 } 00:17:16.162 Got JSON-RPC error response 00:17:16.162 response: 00:17:16.162 { 00:17:16.162 "code": -17, 00:17:16.162 "message": "File exists" 00:17:16.162 } 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 request: 00:17:16.162 { 00:17:16.162 "name": "nvme_second", 00:17:16.162 "trtype": "tcp", 00:17:16.162 "traddr": "10.0.0.3", 00:17:16.162 "adrfam": "ipv4", 00:17:16.162 "trsvcid": "8009", 00:17:16.162 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:16.162 "wait_for_attach": true, 00:17:16.162 "method": "bdev_nvme_start_discovery", 00:17:16.162 "req_id": 1 00:17:16.162 } 00:17:16.162 Got JSON-RPC error response 00:17:16.162 response: 00:17:16.162 { 00:17:16.162 "code": -17, 00:17:16.162 "message": "File exists" 00:17:16.162 } 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:16.162 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.163 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.163 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.163 14:55:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.099 [2024-11-22 14:55:31.727329] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.099 [2024-11-22 14:55:31.727420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x9a7420 with addr=10.0.0.3, port=8010 00:17:17.099 [2024-11-22 14:55:31.727457] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:17.099 [2024-11-22 14:55:31.727468] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:17.099 [2024-11-22 14:55:31.727476] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:18.067 [2024-11-22 14:55:32.727304] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.067 [2024-11-22 14:55:32.727376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a7420 with addr=10.0.0.3, port=8010 00:17:18.067 [2024-11-22 14:55:32.727408] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:18.067 [2024-11-22 14:55:32.727419] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:18.067 [2024-11-22 14:55:32.727427] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:19.443 [2024-11-22 14:55:33.727216] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:19.443 request: 00:17:19.443 { 00:17:19.443 "name": "nvme_second", 00:17:19.443 "trtype": "tcp", 00:17:19.443 "traddr": "10.0.0.3", 00:17:19.443 "adrfam": "ipv4", 00:17:19.443 "trsvcid": "8010", 00:17:19.443 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.443 "wait_for_attach": false, 00:17:19.443 "attach_timeout_ms": 3000, 00:17:19.443 "method": "bdev_nvme_start_discovery", 00:17:19.443 "req_id": 1 00:17:19.443 } 00:17:19.443 Got JSON-RPC error response 00:17:19.443 response: 00:17:19.443 { 00:17:19.443 "code": -110, 00:17:19.443 "message": "Connection timed out" 00:17:19.443 } 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:19.443 14:55:33 
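The repeated connect() failures above occur because nothing listens on port 8010; with -T 3000 the discovery poller retries for roughly three seconds and then surfaces JSON-RPC error -110 ("Connection timed out"), recorded in the request/response pair above. A hedged sketch of that negative test (names and socket path taken from the trace):

# Expect failure: port 8010 has no discovery listener, so the 3000 ms
# attach timeout should expire with "Connection timed out".
if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "unexpected success" >&2
    exit 1
fi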
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76194 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.443 rmmod nvme_tcp 00:17:19.443 rmmod nvme_fabrics 00:17:19.443 rmmod nvme_keyring 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76164 ']' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76164 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76164 ']' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76164 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76164 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.443 killing process with pid 76164 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76164' 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76164 00:17:19.443 14:55:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76164 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.702 14:55:34 
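nvmftestfini above unloads the host-side NVMe-oF kernel modules, kills the target process, and then restores iptables while dropping the SPDK_NVMF-tagged rules the test added. The module unload and iptr steps amount to the following (sketch; the rule-tagging convention comes from nvmf/common.sh):

# Remove firewall rules the test tagged with SPDK_NVMF, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Unload the NVMe/TCP fabrics stack, as traced above.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics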
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:19.702 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:19.960 00:17:19.960 real 0m9.875s 00:17:19.960 user 0m18.692s 00:17:19.960 sys 0m2.189s 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.960 ************************************ 00:17:19.960 END TEST nvmf_host_discovery 00:17:19.960 ************************************ 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.960 ************************************ 00:17:19.960 START TEST nvmf_host_multipath_status 00:17:19.960 ************************************ 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:19.960 * Looking for test 
storage... 00:17:19.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.960 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.219 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:20.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.220 --rc genhtml_branch_coverage=1 00:17:20.220 --rc genhtml_function_coverage=1 00:17:20.220 --rc genhtml_legend=1 00:17:20.220 --rc geninfo_all_blocks=1 00:17:20.220 --rc geninfo_unexecuted_blocks=1 00:17:20.220 00:17:20.220 ' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:20.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.220 --rc genhtml_branch_coverage=1 00:17:20.220 --rc genhtml_function_coverage=1 00:17:20.220 --rc genhtml_legend=1 00:17:20.220 --rc geninfo_all_blocks=1 00:17:20.220 --rc geninfo_unexecuted_blocks=1 00:17:20.220 00:17:20.220 ' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:20.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.220 --rc genhtml_branch_coverage=1 00:17:20.220 --rc genhtml_function_coverage=1 00:17:20.220 --rc genhtml_legend=1 00:17:20.220 --rc geninfo_all_blocks=1 00:17:20.220 --rc geninfo_unexecuted_blocks=1 00:17:20.220 00:17:20.220 ' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:20.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.220 --rc genhtml_branch_coverage=1 00:17:20.220 --rc genhtml_function_coverage=1 00:17:20.220 --rc genhtml_legend=1 00:17:20.220 --rc geninfo_all_blocks=1 00:17:20.220 --rc geninfo_unexecuted_blocks=1 00:17:20.220 00:17:20.220 ' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.220 14:55:34 
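The cmp_versions trace above splits each dotted version on "." and compares the fields numerically; because 1.15 is less than 2, the legacy lcov branch/function coverage flags are selected. A compact stand-in with the same effect for simple numeric versions (assumes GNU sort -V is available; the real helper in scripts/common.sh avoids that dependency):

# True when $1 is strictly older than $2 (numeric dotted versions only).
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
# version_lt 1.15 2  -> succeeds, so the pre-2.x lcov option set is used.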
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.220 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.220 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:20.221 Cannot find device "nvmf_init_br" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:20.221 Cannot find device "nvmf_init_br2" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:20.221 Cannot find device "nvmf_tgt_br" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.221 Cannot find device "nvmf_tgt_br2" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:20.221 Cannot find device "nvmf_init_br" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:20.221 Cannot find device "nvmf_init_br2" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:20.221 Cannot find device "nvmf_tgt_br" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:20.221 Cannot find device "nvmf_tgt_br2" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:20.221 Cannot find device "nvmf_br" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:20.221 Cannot find device "nvmf_init_if" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:20.221 Cannot find device "nvmf_init_if2" 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.221 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.480 14:55:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:20.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:20.480 00:17:20.480 --- 10.0.0.3 ping statistics --- 00:17:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.480 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:20.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:20.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:17:20.480 00:17:20.480 --- 10.0.0.4 ping statistics --- 00:17:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.480 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:20.480 00:17:20.480 --- 10.0.0.1 ping statistics --- 00:17:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.480 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:20.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:20.480 00:17:20.480 --- 10.0.0.2 ping statistics --- 00:17:20.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.480 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.480 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.481 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76698 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76698 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76698 ']' 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
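
What the trace above amounts to is nvmf_veth_init rebuilding the test network from scratch: tear down any leftover interfaces, create the nvmf_tgt_ns_spdk namespace, two initiator and two target veth pairs, a nvmf_br bridge joining their peer legs, ACCEPT rules for the NVMe/TCP port, and ping checks in both directions, after which nvmf_tgt is launched inside the namespace. A minimal sketch of that topology, trimmed to a single initiator/target pair (names and addresses are the ones used above; the real helper also creates nvmf_init_if2/nvmf_tgt_if2 and their bridge legs):

    # namespace plus one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # host-side initiator address and namespaced target address
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring the links up and bridge the *_br legs together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP (port 4420) in and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
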
00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.739 14:55:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:20.739 [2024-11-22 14:55:35.208342] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:17:20.739 [2024-11-22 14:55:35.208474] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.739 [2024-11-22 14:55:35.355768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:20.998 [2024-11-22 14:55:35.407739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.998 [2024-11-22 14:55:35.408010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.998 [2024-11-22 14:55:35.408084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.998 [2024-11-22 14:55:35.408168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.998 [2024-11-22 14:55:35.408254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.998 [2024-11-22 14:55:35.409586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.998 [2024-11-22 14:55:35.409598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.998 [2024-11-22 14:55:35.480434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76698 00:17:21.566 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.825 [2024-11-22 14:55:36.429590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.825 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:22.394 Malloc0 00:17:22.394 14:55:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:22.653 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.653 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:22.912 [2024-11-22 14:55:37.562683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:23.170 [2024-11-22 14:55:37.782694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76749 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76749 /var/tmp/bdevperf.sock 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76749 ']' 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
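
Collected in one place, the target-side provisioning just traced comes down to six RPCs: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, a subsystem with ANA reporting enabled (-r) and room for two namespaces (-m 2), the namespace itself, and one listener per port; bdevperf is then started separately against its own RPC socket (/var/tmp/bdevperf.sock). Paths and arguments are exactly the ones logged; the rpc shell variable is only shorthand here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
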
00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.170 14:55:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:24.548 14:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.548 14:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:24.548 14:55:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:24.548 14:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:24.806 Nvme0n1 00:17:24.806 14:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:25.373 Nvme0n1 00:17:25.373 14:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:25.373 14:55:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:27.279 14:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:27.279 14:55:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:27.538 14:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:27.797 14:55:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:28.734 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:28.734 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:28.734 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.734 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:28.992 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.992 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:28.992 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:28.992 14:55:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.251 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:29.251 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:29.251 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.251 14:55:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:29.510 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:29.510 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:29.510 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.510 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:29.769 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:29.769 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:29.769 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.770 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:30.028 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.028 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:30.028 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.028 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:30.287 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.287 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:30.287 14:55:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:30.575 14:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
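
From here on the test only cycles ANA states and re-runs the same status checks. Each individual check is the two-step pattern visible in the trace: dump the io paths from bdevperf's RPC socket, then pick one field of one trsvcid with jq. A rough reconstruction of that port_status helper, assuming a plain shell wrapper (the exact body in host/multipath_status.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>
    # succeeds iff the path on that port reports the expected value for
    # current / connected / accessible
    port_status() {
        local got
        got=$($rpc -s "$sock" bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    port_status 4420 current true      # 4420 is the active path
    port_status 4421 current false     # 4421 is not
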
00:17:30.834 14:55:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.208 14:55:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:32.467 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.467 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:32.467 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.467 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:32.734 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.734 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:32.734 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.734 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.311 14:55:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.311 14:55:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:33.571 14:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.571 14:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:33.571 14:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:33.831 14:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:34.090 14:55:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:35.044 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:35.044 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:35.044 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.044 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:35.612 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.612 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:35.612 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:35.612 14:55:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.612 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:35.612 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:35.612 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.612 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:36.179 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.179 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:36.179 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:36.179 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.438 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.438 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:36.438 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.438 14:55:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:36.438 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.438 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:36.697 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.697 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:36.697 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.697 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:36.955 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:36.955 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:37.523 14:55:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:38.461 14:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:38.461 14:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:38.461 14:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.461 14:55:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:38.721 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.721 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:38.721 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.721 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:38.980 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:38.980 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:38.980 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.980 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:39.238 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.238 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:39.238 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.238 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:39.496 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.496 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:39.496 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:39.496 14:55:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.755 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.755 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:39.755 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.755 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:40.014 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:40.014 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:40.014 14:55:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:40.273 14:55:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:40.532 14:55:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:41.469 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:41.469 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:41.469 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.469 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:41.728 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:41.728 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:41.728 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.728 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:41.987 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:41.987 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:41.987 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:41.987 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.246 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.246 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:42.246 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:42.246 14:55:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.505 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.505 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:42.505 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.505 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:17:42.764 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:42.764 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:42.764 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:42.764 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.023 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:43.023 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:43.023 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:43.282 14:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:43.541 14:55:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:44.477 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:44.477 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:44.477 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.477 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:44.735 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.735 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:44.735 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:44.735 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.993 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.993 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:44.993 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.993 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
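
When eyeballing these transitions it can help to flatten the whole path table at once instead of one field at a time; the field names below (.transport.trsvcid, .current, .connected, .accessible) are exactly the ones the checks above select on, while the output formatting of the one-liner is only an illustration:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] |
               "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'
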
00:17:45.561 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.561 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:45.561 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:45.561 14:55:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.561 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.561 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:45.561 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.561 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:45.820 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:45.820 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:46.079 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.079 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:46.338 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.338 14:56:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:46.597 14:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:46.597 14:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:46.858 14:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:47.117 14:56:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:48.066 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:48.066 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:48.066 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
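
The bdev_nvme_set_multipath_policy call above switches Nvme0n1 to active_active; until now only one path at a time has shown current=true even with both listeners optimized, whereas the check running here (check_status true true ...) expects the paths on both 4420 and 4421 to report current=true at the same time. A minimal sketch of that switch together with the set_ANA_state pairing it is exercised with (the commands are the ones traced; the function wrapper is only illustrative, mirroring multipath_status.sh@59-60):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # spread I/O across every optimized path instead of a single active one
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    # set_ANA_state <state for 4420> <state for 4421>
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    set_ANA_state optimized optimized    # the states cycled above also include
                                         # non_optimized and inaccessible
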
00:17:48.066 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:48.324 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.324 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:48.324 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.324 14:56:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:48.583 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.583 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:48.583 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.583 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:48.842 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.843 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:48.843 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.843 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:49.102 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.102 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:49.102 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.102 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:49.361 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.361 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:49.361 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.361 14:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:49.620 14:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.620 
14:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:49.620 14:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:49.879 14:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:50.140 14:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:51.076 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:51.076 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:51.076 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.076 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:51.335 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:51.335 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:51.335 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.335 14:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.902 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.470 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.470 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:52.470 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.470 14:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:52.470 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.470 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:52.729 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.729 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:52.988 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.988 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:52.988 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:52.988 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:53.247 14:56:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:54.629 14:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:54.629 14:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:54.629 14:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.629 14:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:54.629 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.629 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:54.629 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.629 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:54.890 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.890 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:54.890 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.890 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:55.149 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.149 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:55.149 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.149 14:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:55.407 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.407 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:55.407 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.407 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.666 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.666 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:55.666 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.666 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.233 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.233 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:56.233 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:56.492 14:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:56.751 14:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:57.688 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:57.688 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:57.688 14:56:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.688 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.947 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.947 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:57.947 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.947 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:58.206 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.206 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:58.206 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:58.206 14:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.464 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.464 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:58.464 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.464 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.723 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.723 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:58.723 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.723 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:58.981 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.981 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:58.981 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.981 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76749 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76749 ']' 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76749 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76749 00:17:59.239 killing process with pid 76749 00:17:59.239 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.240 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.240 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76749' 00:17:59.240 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76749 00:17:59.240 14:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76749 00:17:59.240 { 00:17:59.240 "results": [ 00:17:59.240 { 00:17:59.240 "job": "Nvme0n1", 00:17:59.240 "core_mask": "0x4", 00:17:59.240 "workload": "verify", 00:17:59.240 "status": "terminated", 00:17:59.240 "verify_range": { 00:17:59.240 "start": 0, 00:17:59.240 "length": 16384 00:17:59.240 }, 00:17:59.240 "queue_depth": 128, 00:17:59.240 "io_size": 4096, 00:17:59.240 "runtime": 33.948755, 00:17:59.240 "iops": 9402.171007449317, 00:17:59.240 "mibps": 36.72723049784889, 00:17:59.240 "io_failed": 0, 00:17:59.240 "io_timeout": 0, 00:17:59.240 "avg_latency_us": 13585.690550822646, 00:17:59.240 "min_latency_us": 245.76, 00:17:59.240 "max_latency_us": 4057035.869090909 00:17:59.240 } 00:17:59.240 ], 00:17:59.240 "core_count": 1 00:17:59.240 } 00:17:59.508 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76749 00:17:59.508 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:59.508 [2024-11-22 14:55:37.849922] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:17:59.508 [2024-11-22 14:55:37.850018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76749 ] 00:17:59.508 [2024-11-22 14:55:37.994179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.508 [2024-11-22 14:55:38.053531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.508 [2024-11-22 14:55:38.124759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.508 Running I/O for 90 seconds... 
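The trace above repeatedly exercises two building blocks: flipping the ANA state of the two listeners with nvmf_subsystem_listener_set_ana_state, then reading back the per-path current/connected/accessible flags through bdev_nvme_get_io_paths filtered with jq. A minimal sketch of that pattern follows, assuming only the commands visible in the trace; the function names and structure here are illustrative, the real helpers are set_ANA_state and port_status in test/nvmf/host/multipath_status.sh.

```bash
#!/usr/bin/env bash
# Illustrative re-creation of the pattern traced above. Paths, NQN, address and the
# bdevperf RPC socket are taken verbatim from the trace; everything else is a sketch.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Set the ANA state of the two listeners (ports 4420 and 4421), as set_ANA_state does.
set_ana() {   # set_ana <state-for-4420> <state-for-4421>
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# Read one flag (current/connected/accessible) for the path on a given port, using the
# same bdev_nvme_get_io_paths + jq filter that port_status runs in the trace.
path_flag() { # path_flag <port> <flag>
    "$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

set_ana non_optimized inaccessible
sleep 1   # give the host a moment to pick up the ANA change, mirroring the sleep in the trace
[[ "$(path_flag 4421 accessible)" == false ]] || echo "port 4421 unexpectedly accessible"
```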
00:17:59.508 8085.00 IOPS, 31.58 MiB/s [2024-11-22T14:56:14.173Z] 8138.50 IOPS, 31.79 MiB/s [2024-11-22T14:56:14.173Z] 8113.67 IOPS, 31.69 MiB/s [2024-11-22T14:56:14.173Z] 8069.25 IOPS, 31.52 MiB/s [2024-11-22T14:56:14.173Z] 8016.80 IOPS, 31.32 MiB/s [2024-11-22T14:56:14.173Z] 8180.50 IOPS, 31.96 MiB/s [2024-11-22T14:56:14.173Z] 8517.00 IOPS, 33.27 MiB/s [2024-11-22T14:56:14.173Z] 8791.12 IOPS, 34.34 MiB/s [2024-11-22T14:56:14.173Z] 8993.67 IOPS, 35.13 MiB/s [2024-11-22T14:56:14.173Z] 9178.30 IOPS, 35.85 MiB/s [2024-11-22T14:56:14.173Z] 9310.36 IOPS, 36.37 MiB/s [2024-11-22T14:56:14.173Z] 9421.08 IOPS, 36.80 MiB/s [2024-11-22T14:56:14.173Z] 9532.69 IOPS, 37.24 MiB/s [2024-11-22T14:56:14.173Z] 9619.79 IOPS, 37.58 MiB/s [2024-11-22T14:56:14.173Z] [2024-11-22 14:55:54.705402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.705725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.705975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.705989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 
14:55:54.706089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100032 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.706800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.706838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.706882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.706914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.706945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.706995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.707008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.707040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.508 [2024-11-22 14:55:54.707071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.707103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.707134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.707166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.508 [2024-11-22 14:55:54.707197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:59.508 [2024-11-22 14:55:54.707215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:17:59.509 [2024-11-22 14:55:54.707318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.707658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.707963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.707982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708119] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.509 [2024-11-22 14:55:54.708616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.509 [2024-11-22 14:55:54.708736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:59.509 [2024-11-22 14:55:54.708755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.708769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.708805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.708819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.708838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.708852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.708889] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.708920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.708954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.708968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.708986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 
14:55:54.709258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.510 [2024-11-22 14:55:54.709558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:59.510 [2024-11-22 14:55:54.709970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.510 [2024-11-22 14:55:54.709983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:17:59.510 [2024-11-22 14:55:54.710001 - 14:55:54.731296] nvme_qpair.c: [… several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs: READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1, nsid:1, lba 99880-100896, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 …]
00:17:59.515 [2024-11-22 14:55:54.731323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.515 [2024-11-22 14:55:54.731342] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.515 [2024-11-22 14:55:54.731671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.515 [2024-11-22 14:55:54.731690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.731753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.731799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.731845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 
14:55:54.731892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.731938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.731965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.731984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.732618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.732964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.732992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.733011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 
m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.516 [2024-11-22 14:55:54.733445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.733491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:59.516 [2024-11-22 14:55:54.733519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.516 [2024-11-22 14:55:54.733538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.733822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.733867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.733915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.733942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.733973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 
14:55:54.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.734609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.734961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.734980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.735007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.735026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.735054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.735073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.735100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.735119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.735155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.735175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.735221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.737312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.737427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.517 [2024-11-22 14:55:54.737479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.737527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.737573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.517 [2024-11-22 14:55:54.737619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:59.517 [2024-11-22 14:55:54.737646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.737957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.737976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.738723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.738770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.738824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.738870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.738918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.738951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.738972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.739019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.739067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.518 [2024-11-22 14:55:54.739122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.518 [2024-11-22 14:55:54.739808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:59.518 [2024-11-22 14:55:54.739827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.519 [2024-11-22 14:55:54.739840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.739861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.519 [2024-11-22 14:55:54.739890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.739926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.519 [2024-11-22 14:55:54.739943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.739962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.739977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.739996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 14:55:54.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:59.519 [2024-11-22 14:55:54.740297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:59.519 [2024-11-22 
14:55:54] nvme_qpair.c: [repeated *NOTICE* entries: WRITE and READ commands on sqid:1 nsid:1 (lba 100200-100896, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:0007-004a p:0 m:0 dnr:0]
00:17:59.521 9570.47 IOPS, 37.38 MiB/s [2024-11-22T14:56:14.186Z] 8972.31 IOPS, 35.05 MiB/s [2024-11-22T14:56:14.186Z] 8444.53 IOPS, 32.99 MiB/s [2024-11-22T14:56:14.186Z] 7975.39 IOPS, 31.15 MiB/s [2024-11-22T14:56:14.186Z] 7620.95 IOPS, 29.77 MiB/s [2024-11-22T14:56:14.186Z] 7772.30 IOPS, 30.36 MiB/s [2024-11-22T14:56:14.186Z] 7904.19 IOPS, 30.88 MiB/s [2024-11-22T14:56:14.186Z] 8093.91 IOPS, 31.62 MiB/s [2024-11-22T14:56:14.186Z] 8356.83 IOPS, 32.64 MiB/s [2024-11-22T14:56:14.186Z] 8548.79 IOPS, 33.39 MiB/s [2024-11-22T14:56:14.186Z] 8704.52 IOPS, 34.00 MiB/s [2024-11-22T14:56:14.186Z] 8775.38 IOPS, 34.28 MiB/s [2024-11-22T14:56:14.186Z] 8849.37 IOPS, 34.57 MiB/s [2024-11-22T14:56:14.186Z] 8917.89 IOPS, 34.84 MiB/s [2024-11-22T14:56:14.186Z] 9131.07 IOPS, 35.67 MiB/s [2024-11-22T14:56:14.186Z] 9266.87 IOPS, 36.20 MiB/s [2024-11-22T14:56:14.186Z] 9364.74 IOPS, 36.58 MiB/s [2024-11-22T14:56:14.186Z]
00:17:59.521 [2024-11-22 14:56:11] nvme_qpair.c: [repeated *NOTICE* entries: WRITE and READ commands on sqid:1 nsid:1 (lba 120936-121920, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:003a-005a p:0 m:0 dnr:0]
00:17:59.521 9380.94 IOPS, 36.64 MiB/s [2024-11-22T14:56:14.186Z] 9393.70 IOPS, 36.69 MiB/s [2024-11-22T14:56:14.186Z] Received shutdown signal, test time was about 33.949454 seconds
00:17:59.521
00:17:59.521                                                                                                 Latency(us)
00:17:59.521 [2024-11-22T14:56:14.186Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min          max
00:17:59.521 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:59.521 Verification LBA range: start 0x0 length 0x4000
00:17:59.521 Nvme0n1                     :      33.95    9402.17      36.73       0.00      0.00    13585.69     245.76   4057035.87
00:17:59.521 [2024-11-22T14:56:14.186Z] ===================================================================================================================
00:17:59.521 [2024-11-22T14:56:14.186Z] Total                       :               9402.17      36.73       0.00      0.00    13585.69     245.76   4057035.87
00:17:59.522 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
nvmf/common.sh@121 -- # sync 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.780 rmmod nvme_tcp 00:17:59.780 rmmod nvme_fabrics 00:17:59.780 rmmod nvme_keyring 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76698 ']' 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76698 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76698 ']' 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76698 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:59.780 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76698 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.039 killing process with pid 76698 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76698' 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76698 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76698 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.039 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # 
ip link set nvmf_init_br nomaster 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:18:00.298 00:18:00.298 real 0m40.433s 00:18:00.298 user 2m9.634s 00:18:00.298 sys 0m12.093s 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.298 ************************************ 00:18:00.298 END TEST nvmf_host_multipath_status 00:18:00.298 14:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:00.298 ************************************ 00:18:00.557 14:56:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:00.557 14:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.557 14:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.557 14:56:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.557 ************************************ 00:18:00.558 START TEST nvmf_discovery_remove_ifc 00:18:00.558 ************************************ 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:00.558 * Looking for test storage... 
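For orientation, the nvmftestfini / nvmf_veth_fini sequence traced above reduces to roughly the following shell steps. This is a condensed sketch assembled only from the commands visible in the trace: the killprocess helper is reduced to kill + wait, the loop over the four bridge ports stands in for the individual traced calls, and the final netns removal is an assumption about what remove_spdk_ns does in this run.

  # unload the kernel NVMe-oF initiator modules (trace: rmmod nvme_tcp / nvme_fabrics / nvme_keyring)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the SPDK nvmf target process (pid 76698 in this run); killprocess also waits for it to exit
  kill "$nvmfpid" && wait "$nvmfpid"
  # drop only the iptables rules tagged SPDK_NVMF, keep everything else
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach the veth bridge ports, bring them down, then delete the bridge and veth pairs
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns here

The SPDK_NVMF comment on the test's iptables rules is what makes the grep -v SPDK_NVMF restore pass strip only the harness's own rules.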
00:18:00.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.558 --rc genhtml_branch_coverage=1 00:18:00.558 --rc genhtml_function_coverage=1 00:18:00.558 --rc genhtml_legend=1 00:18:00.558 --rc geninfo_all_blocks=1 00:18:00.558 --rc geninfo_unexecuted_blocks=1 00:18:00.558 00:18:00.558 ' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.558 --rc genhtml_branch_coverage=1 00:18:00.558 --rc genhtml_function_coverage=1 00:18:00.558 --rc genhtml_legend=1 00:18:00.558 --rc geninfo_all_blocks=1 00:18:00.558 --rc geninfo_unexecuted_blocks=1 00:18:00.558 00:18:00.558 ' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.558 --rc genhtml_branch_coverage=1 00:18:00.558 --rc genhtml_function_coverage=1 00:18:00.558 --rc genhtml_legend=1 00:18:00.558 --rc geninfo_all_blocks=1 00:18:00.558 --rc geninfo_unexecuted_blocks=1 00:18:00.558 00:18:00.558 ' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.558 --rc genhtml_branch_coverage=1 00:18:00.558 --rc genhtml_function_coverage=1 00:18:00.558 --rc genhtml_legend=1 00:18:00.558 --rc geninfo_all_blocks=1 00:18:00.558 --rc geninfo_unexecuted_blocks=1 00:18:00.558 00:18:00.558 ' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.558 14:56:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.558 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.558 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.559 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.818 14:56:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.818 Cannot find device "nvmf_init_br" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.818 Cannot find device "nvmf_init_br2" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.818 Cannot find device "nvmf_tgt_br" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.818 Cannot find device "nvmf_tgt_br2" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.818 Cannot find device "nvmf_init_br" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.818 Cannot find device "nvmf_init_br2" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.818 Cannot find device "nvmf_tgt_br" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.818 Cannot find device "nvmf_tgt_br2" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.818 Cannot find device "nvmf_br" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.818 Cannot find device "nvmf_init_if" 00:18:00.818 14:56:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.818 Cannot find device "nvmf_init_if2" 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.818 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.819 14:56:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.819 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:18:01.078 00:18:01.078 --- 10.0.0.3 ping statistics --- 00:18:01.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.078 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:18:01.078 00:18:01.078 --- 10.0.0.4 ping statistics --- 00:18:01.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.078 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:01.078 00:18:01.078 --- 10.0.0.1 ping statistics --- 00:18:01.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.078 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:18:01.078 00:18:01.078 --- 10.0.0.2 ping statistics --- 00:18:01.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.078 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77598 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77598 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77598 ']' 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
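Editor's note on the setup just traced: nvmf_veth_init builds the test topology in software. Two initiator veth pairs stay on the host (10.0.0.1 and 10.0.0.2), two target veth pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the bridge-side ends are enslaved to nvmf_br; the earlier "Cannot find device" / "Cannot open network namespace" messages are only the idempotent cleanup pass running before any of the devices exist. A condensed shell sketch of the same sequence follows, with device names, addresses and the port-4420 rules taken from the trace above. It is an illustration of what the harness does, not the harness code itself.

ip netns add nvmf_tgt_ns_spdk
# veth pairs: *_if is the endpoint that receives an IP, *_br is the end enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# allow NVMe/TCP (port 4420) in from the initiator interfaces and across the bridge;
# the harness additionally tags each rule with an SPDK_NVMF comment so teardown can strip them
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity checks, as in the ping output above: host reaches the namespace and vice versa
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2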
00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.078 14:56:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.078 [2024-11-22 14:56:15.682005] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:18:01.078 [2024-11-22 14:56:15.682097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.337 [2024-11-22 14:56:15.835985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.337 [2024-11-22 14:56:15.891438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.337 [2024-11-22 14:56:15.891509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.337 [2024-11-22 14:56:15.891524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.337 [2024-11-22 14:56:15.891535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.337 [2024-11-22 14:56:15.891544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.337 [2024-11-22 14:56:15.891998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.337 [2024-11-22 14:56:15.957588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.595 [2024-11-22 14:56:16.086921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.595 [2024-11-22 14:56:16.095110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:01.595 null0 00:18:01.595 [2024-11-22 14:56:16.126979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77627 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77627 /tmp/host.sock 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77627 ']' 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:01.595 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.595 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.595 [2024-11-22 14:56:16.211250] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:18:01.595 [2024-11-22 14:56:16.211341] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77627 ] 00:18:01.854 [2024-11-22 14:56:16.364146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.854 [2024-11-22 14:56:16.417889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.854 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.113 [2024-11-22 14:56:16.530836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.113 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.113 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:02.113 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.113 14:56:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.050 [2024-11-22 14:56:17.580119] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:03.050 [2024-11-22 14:56:17.580177] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:03.050 [2024-11-22 14:56:17.580199] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:03.050 [2024-11-22 14:56:17.586157] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:03.050 [2024-11-22 14:56:17.640615] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:03.050 [2024-11-22 14:56:17.641710] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd6cfc0:1 started. 00:18:03.050 [2024-11-22 14:56:17.643529] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:03.050 [2024-11-22 14:56:17.643605] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:03.050 [2024-11-22 14:56:17.643634] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:03.050 [2024-11-22 14:56:17.643652] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:03.050 [2024-11-22 14:56:17.643679] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:03.050 [2024-11-22 14:56:17.648737] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd6cfc0 was disconnected and freed. delete nvme_qpair. 
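At this point two SPDK apps are running: the target (nvmfpid 77598) inside nvmf_tgt_ns_spdk with a TCP transport listening on 10.0.0.3:8009 for discovery and 10.0.0.3:4420 for I/O, and a host-side app (hostpid 77627) started from discovery_remove_ifc.sh@58 on /tmp/host.sock with --wait-for-rpc and bdev_nvme debug logging. The host-side RPC sequence just traced amounts to the sketch below; rpc_cmd in the trace is the harness wrapper, shown here as a plain scripts/rpc.py invocation, which is an assumption of the sketch rather than the literal harness call.

# host app (acts as the NVMe-oF initiator through bdev_nvme), from discovery_remove_ifc.sh@58
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# discovery_remove_ifc.sh@65: bdev_nvme_set_options exactly as issued in the trace
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
# discovery_remove_ifc.sh@66: finish the startup that --wait-for-rpc deferred
scripts/rpc.py -s /tmp/host.sock framework_start_init
# discovery_remove_ifc.sh@69: attach through the discovery service with short reconnect and
# ctrlr-loss timeouts; --wait-for-attach returns once the discovered subsystem is attached,
# which is the "attach nvme0 done" line that follows in the log
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach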
00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:18:03.050 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:03.310 14:56:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:04.303 14:56:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:04.303 14:56:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:05.249 14:56:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:06.625 14:56:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.561 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:07.561 14:56:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.562 14:56:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.562 14:56:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:07.562 14:56:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:08.498 14:56:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:08.498 [2024-11-22 14:56:23.081175] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:08.498 [2024-11-22 14:56:23.081250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.498 [2024-11-22 14:56:23.081266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.498 [2024-11-22 14:56:23.081278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.498 [2024-11-22 14:56:23.081287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.498 [2024-11-22 14:56:23.081298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.498 [2024-11-22 14:56:23.081307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.498 [2024-11-22 14:56:23.081316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.498 [2024-11-22 14:56:23.081324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.498 [2024-11-22 14:56:23.081333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:08.498 [2024-11-22 14:56:23.081342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:08.498 [2024-11-22 14:56:23.081351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49240 is same with the state(6) to be set 00:18:08.498 [2024-11-22 14:56:23.091177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49240 (9): Bad file descriptor 00:18:08.498 [2024-11-22 14:56:23.101198] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:08.498 [2024-11-22 14:56:23.101236] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:08.498 [2024-11-22 14:56:23.101243] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:08.498 [2024-11-22 14:56:23.101248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:08.498 [2024-11-22 14:56:23.101299] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.435 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.694 [2024-11-22 14:56:24.165490] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:18:09.694 [2024-11-22 14:56:24.165606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd49240 with addr=10.0.0.3, port=4420 00:18:09.694 [2024-11-22 14:56:24.165638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd49240 is same with the state(6) to be set 00:18:09.694 [2024-11-22 14:56:24.165696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49240 (9): Bad file descriptor 00:18:09.694 [2024-11-22 14:56:24.166636] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:09.694 [2024-11-22 14:56:24.166735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:09.694 [2024-11-22 14:56:24.166761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:09.694 [2024-11-22 14:56:24.166787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:09.694 [2024-11-22 14:56:24.166808] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:09.694 [2024-11-22 14:56:24.166822] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
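Once nvme0n1 is present, discovery_remove_ifc.sh@75/@76 delete the target's first address and bring nvmf_tgt_if down, and the test polls the host's bdev list once per second until it is empty (wait_for_bdev ''). The "Connection timed out" read error, the ABORTED - SQ DELETION completions, and the failed reconnect attempts above are the expected fallout: with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host retries briefly, then gives up and drops the controller and its bdev. A condensed sketch of that fault injection and wait loop, with scripts/rpc.py again standing in for the harness's rpc_cmd:

# discovery_remove_ifc.sh@75/@76: yank the address the host is connected to
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# wait_for_bdev '': poll the bdev list (sh@29/@33/@34 in the trace) until no bdevs remain
while true; do
    bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ -z $bdevs ]] && break
    sleep 1
done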
00:18:09.694 [2024-11-22 14:56:24.166833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:09.694 [2024-11-22 14:56:24.166854] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:09.694 [2024-11-22 14:56:24.166866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:09.694 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.694 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:09.694 14:56:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:10.630 [2024-11-22 14:56:25.166951] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:10.630 [2024-11-22 14:56:25.166999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:10.630 [2024-11-22 14:56:25.167024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:10.631 [2024-11-22 14:56:25.167049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:10.631 [2024-11-22 14:56:25.167058] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:10.631 [2024-11-22 14:56:25.167067] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:10.631 [2024-11-22 14:56:25.167073] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:10.631 [2024-11-22 14:56:25.167078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:10.631 [2024-11-22 14:56:25.167110] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:10.631 [2024-11-22 14:56:25.167152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.631 [2024-11-22 14:56:25.167166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.631 [2024-11-22 14:56:25.167178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.631 [2024-11-22 14:56:25.167187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.631 [2024-11-22 14:56:25.167195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.631 [2024-11-22 14:56:25.167203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.631 [2024-11-22 14:56:25.167211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.631 [2024-11-22 14:56:25.167219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.631 [2024-11-22 14:56:25.167228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:10.631 [2024-11-22 14:56:25.167235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.631 [2024-11-22 14:56:25.167243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:18:10.631 [2024-11-22 14:56:25.167274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4a20 (9): Bad file descriptor 00:18:10.631 [2024-11-22 14:56:25.168014] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:10.631 [2024-11-22 14:56:25.168055] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.631 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.889 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:10.889 14:56:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.827 14:56:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.827 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.828 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.828 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:11.828 14:56:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:12.765 [2024-11-22 14:56:27.173982] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:12.765 [2024-11-22 14:56:27.174024] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:12.765 [2024-11-22 14:56:27.174075] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:12.765 [2024-11-22 14:56:27.180020] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:12.765 [2024-11-22 14:56:27.234390] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:12.765 [2024-11-22 14:56:27.235204] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd75290:1 started. 00:18:12.765 [2024-11-22 14:56:27.236664] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:12.765 [2024-11-22 14:56:27.236726] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:12.765 [2024-11-22 14:56:27.236763] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:12.765 [2024-11-22 14:56:27.236778] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:12.765 [2024-11-22 14:56:27.236786] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:12.766 [2024-11-22 14:56:27.242362] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd75290 was disconnected and freed. delete nvme_qpair. 
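Recovery is the mirror image: discovery_remove_ifc.sh@82/@83 re-add 10.0.0.3/24 and bring nvmf_tgt_if back up, and wait_for_bdev nvme1n1 polls until the discovery service started at sh@69 re-attaches the subsystem as a new controller, which is what the "new subsystem nvme1" and "attach nvme1 done" lines above record. A minimal sketch of that step, under the same scripts/rpc.py assumption as before:

# discovery_remove_ifc.sh@82/@83: restore the target address and interface
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# wait_for_bdev nvme1n1: the re-attached controller exposes a new namespace bdev
until [[ $(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == nvme1n1 ]]; do
    sleep 1
done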
00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.766 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77627 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77627 ']' 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77627 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77627 00:18:13.025 killing process with pid 77627 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77627' 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77627 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77627 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.025 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.284 rmmod nvme_tcp 00:18:13.284 rmmod nvme_fabrics 00:18:13.284 rmmod nvme_keyring 00:18:13.284 14:56:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77598 ']' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77598 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77598 ']' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77598 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77598 00:18:13.284 killing process with pid 77598 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77598' 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77598 00:18:13.284 14:56:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77598 00:18:13.543 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.543 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.544 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:13.804 00:18:13.804 real 0m13.328s 00:18:13.804 user 0m22.350s 00:18:13.804 sys 0m2.614s 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.804 ************************************ 00:18:13.804 END TEST nvmf_discovery_remove_ifc 00:18:13.804 ************************************ 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.804 ************************************ 00:18:13.804 START TEST nvmf_identify_kernel_target 00:18:13.804 ************************************ 00:18:13.804 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:13.804 * Looking for test storage... 
00:18:14.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.066 --rc genhtml_branch_coverage=1 00:18:14.066 --rc genhtml_function_coverage=1 00:18:14.066 --rc genhtml_legend=1 00:18:14.066 --rc geninfo_all_blocks=1 00:18:14.066 --rc geninfo_unexecuted_blocks=1 00:18:14.066 00:18:14.066 ' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.066 --rc genhtml_branch_coverage=1 00:18:14.066 --rc genhtml_function_coverage=1 00:18:14.066 --rc genhtml_legend=1 00:18:14.066 --rc geninfo_all_blocks=1 00:18:14.066 --rc geninfo_unexecuted_blocks=1 00:18:14.066 00:18:14.066 ' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.066 --rc genhtml_branch_coverage=1 00:18:14.066 --rc genhtml_function_coverage=1 00:18:14.066 --rc genhtml_legend=1 00:18:14.066 --rc geninfo_all_blocks=1 00:18:14.066 --rc geninfo_unexecuted_blocks=1 00:18:14.066 00:18:14.066 ' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.066 --rc genhtml_branch_coverage=1 00:18:14.066 --rc genhtml_function_coverage=1 00:18:14.066 --rc genhtml_legend=1 00:18:14.066 --rc geninfo_all_blocks=1 00:18:14.066 --rc geninfo_unexecuted_blocks=1 00:18:14.066 00:18:14.066 ' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
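The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2: each version string is split on '.' and '-' and the fields are compared one by one; here 1 < 2 settles it at the first field, and LCOV_OPTS is then exported with the --rc branch/function coverage flags shown. A minimal stand-in for that comparison, assuming only what the trace shows; the helper name below is mine, not the harness's.

# field-by-field version comparison in the spirit of scripts/common.sh cmp_versions
version_lt() {          # succeeds when $1 sorts before $2
    local IFS=.- v
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        ((${a[v]:-0} < ${b[v]:-0})) && return 0
        ((${a[v]:-0} > ${b[v]:-0})) && return 1
    done
    return 1
}
version_lt 1.15 2 && export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'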
00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.066 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:14.067 14:56:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.067 14:56:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.067 Cannot find device "nvmf_init_br" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.067 Cannot find device "nvmf_init_br2" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.067 Cannot find device "nvmf_tgt_br" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.067 Cannot find device "nvmf_tgt_br2" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.067 Cannot find device "nvmf_init_br" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.067 Cannot find device "nvmf_init_br2" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.067 Cannot find device "nvmf_tgt_br" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.067 Cannot find device "nvmf_tgt_br2" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.067 Cannot find device "nvmf_br" 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:14.067 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.326 Cannot find device "nvmf_init_if" 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.326 Cannot find device "nvmf_init_if2" 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.326 14:56:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.326 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.327 14:56:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.327 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:18:14.586 00:18:14.586 --- 10.0.0.3 ping statistics --- 00:18:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.586 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:14.586 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:14.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:18:14.586 00:18:14.586 --- 10.0.0.4 ping statistics --- 00:18:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.586 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:14.586 14:56:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:14.586 00:18:14.586 --- 10.0.0.1 ping statistics --- 00:18:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.586 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:14.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:18:14.586 00:18:14.586 --- 10.0.0.2 ping statistics --- 00:18:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.586 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:14.586 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:14.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.845 Waiting for block devices as requested 00:18:14.845 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:15.104 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:15.104 No valid GPT data, bailing 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:15.104 14:56:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:15.104 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:15.363 No valid GPT data, bailing 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:15.363 No valid GPT data, bailing 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:15.363 No valid GPT data, bailing 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:15.363 14:56:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -a 10.0.0.1 -t tcp -s 4420 00:18:15.363 00:18:15.363 Discovery Log Number of Records 2, Generation counter 2 00:18:15.363 =====Discovery Log Entry 0====== 00:18:15.363 trtype: tcp 00:18:15.363 adrfam: ipv4 00:18:15.363 subtype: current discovery subsystem 00:18:15.363 treq: not specified, sq flow control disable supported 00:18:15.363 portid: 1 00:18:15.363 trsvcid: 4420 00:18:15.363 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:15.363 traddr: 10.0.0.1 00:18:15.363 eflags: none 00:18:15.363 sectype: none 00:18:15.363 =====Discovery Log Entry 1====== 00:18:15.363 trtype: tcp 00:18:15.363 adrfam: ipv4 00:18:15.363 subtype: nvme subsystem 00:18:15.363 treq: not 
specified, sq flow control disable supported 00:18:15.363 portid: 1 00:18:15.363 trsvcid: 4420 00:18:15.363 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:15.363 traddr: 10.0.0.1 00:18:15.363 eflags: none 00:18:15.363 sectype: none 00:18:15.363 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:15.363 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:15.622 ===================================================== 00:18:15.622 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:15.622 ===================================================== 00:18:15.622 Controller Capabilities/Features 00:18:15.622 ================================ 00:18:15.622 Vendor ID: 0000 00:18:15.622 Subsystem Vendor ID: 0000 00:18:15.622 Serial Number: 187c1e6b7e2e2adeaa5d 00:18:15.622 Model Number: Linux 00:18:15.622 Firmware Version: 6.8.9-20 00:18:15.622 Recommended Arb Burst: 0 00:18:15.622 IEEE OUI Identifier: 00 00 00 00:18:15.622 Multi-path I/O 00:18:15.622 May have multiple subsystem ports: No 00:18:15.622 May have multiple controllers: No 00:18:15.622 Associated with SR-IOV VF: No 00:18:15.622 Max Data Transfer Size: Unlimited 00:18:15.622 Max Number of Namespaces: 0 00:18:15.622 Max Number of I/O Queues: 1024 00:18:15.622 NVMe Specification Version (VS): 1.3 00:18:15.622 NVMe Specification Version (Identify): 1.3 00:18:15.622 Maximum Queue Entries: 1024 00:18:15.622 Contiguous Queues Required: No 00:18:15.622 Arbitration Mechanisms Supported 00:18:15.622 Weighted Round Robin: Not Supported 00:18:15.622 Vendor Specific: Not Supported 00:18:15.622 Reset Timeout: 7500 ms 00:18:15.622 Doorbell Stride: 4 bytes 00:18:15.622 NVM Subsystem Reset: Not Supported 00:18:15.622 Command Sets Supported 00:18:15.622 NVM Command Set: Supported 00:18:15.622 Boot Partition: Not Supported 00:18:15.622 Memory Page Size Minimum: 4096 bytes 00:18:15.622 Memory Page Size Maximum: 4096 bytes 00:18:15.622 Persistent Memory Region: Not Supported 00:18:15.622 Optional Asynchronous Events Supported 00:18:15.622 Namespace Attribute Notices: Not Supported 00:18:15.622 Firmware Activation Notices: Not Supported 00:18:15.622 ANA Change Notices: Not Supported 00:18:15.622 PLE Aggregate Log Change Notices: Not Supported 00:18:15.622 LBA Status Info Alert Notices: Not Supported 00:18:15.622 EGE Aggregate Log Change Notices: Not Supported 00:18:15.622 Normal NVM Subsystem Shutdown event: Not Supported 00:18:15.622 Zone Descriptor Change Notices: Not Supported 00:18:15.622 Discovery Log Change Notices: Supported 00:18:15.622 Controller Attributes 00:18:15.622 128-bit Host Identifier: Not Supported 00:18:15.622 Non-Operational Permissive Mode: Not Supported 00:18:15.622 NVM Sets: Not Supported 00:18:15.622 Read Recovery Levels: Not Supported 00:18:15.622 Endurance Groups: Not Supported 00:18:15.622 Predictable Latency Mode: Not Supported 00:18:15.622 Traffic Based Keep ALive: Not Supported 00:18:15.622 Namespace Granularity: Not Supported 00:18:15.622 SQ Associations: Not Supported 00:18:15.622 UUID List: Not Supported 00:18:15.622 Multi-Domain Subsystem: Not Supported 00:18:15.622 Fixed Capacity Management: Not Supported 00:18:15.623 Variable Capacity Management: Not Supported 00:18:15.623 Delete Endurance Group: Not Supported 00:18:15.623 Delete NVM Set: Not Supported 00:18:15.623 Extended LBA Formats Supported: Not Supported 00:18:15.623 Flexible Data 
Placement Supported: Not Supported 00:18:15.623 00:18:15.623 Controller Memory Buffer Support 00:18:15.623 ================================ 00:18:15.623 Supported: No 00:18:15.623 00:18:15.623 Persistent Memory Region Support 00:18:15.623 ================================ 00:18:15.623 Supported: No 00:18:15.623 00:18:15.623 Admin Command Set Attributes 00:18:15.623 ============================ 00:18:15.623 Security Send/Receive: Not Supported 00:18:15.623 Format NVM: Not Supported 00:18:15.623 Firmware Activate/Download: Not Supported 00:18:15.623 Namespace Management: Not Supported 00:18:15.623 Device Self-Test: Not Supported 00:18:15.623 Directives: Not Supported 00:18:15.623 NVMe-MI: Not Supported 00:18:15.623 Virtualization Management: Not Supported 00:18:15.623 Doorbell Buffer Config: Not Supported 00:18:15.623 Get LBA Status Capability: Not Supported 00:18:15.623 Command & Feature Lockdown Capability: Not Supported 00:18:15.623 Abort Command Limit: 1 00:18:15.623 Async Event Request Limit: 1 00:18:15.623 Number of Firmware Slots: N/A 00:18:15.623 Firmware Slot 1 Read-Only: N/A 00:18:15.623 Firmware Activation Without Reset: N/A 00:18:15.623 Multiple Update Detection Support: N/A 00:18:15.623 Firmware Update Granularity: No Information Provided 00:18:15.623 Per-Namespace SMART Log: No 00:18:15.623 Asymmetric Namespace Access Log Page: Not Supported 00:18:15.623 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:15.623 Command Effects Log Page: Not Supported 00:18:15.623 Get Log Page Extended Data: Supported 00:18:15.623 Telemetry Log Pages: Not Supported 00:18:15.623 Persistent Event Log Pages: Not Supported 00:18:15.623 Supported Log Pages Log Page: May Support 00:18:15.623 Commands Supported & Effects Log Page: Not Supported 00:18:15.623 Feature Identifiers & Effects Log Page:May Support 00:18:15.623 NVMe-MI Commands & Effects Log Page: May Support 00:18:15.623 Data Area 4 for Telemetry Log: Not Supported 00:18:15.623 Error Log Page Entries Supported: 1 00:18:15.623 Keep Alive: Not Supported 00:18:15.623 00:18:15.623 NVM Command Set Attributes 00:18:15.623 ========================== 00:18:15.623 Submission Queue Entry Size 00:18:15.623 Max: 1 00:18:15.623 Min: 1 00:18:15.623 Completion Queue Entry Size 00:18:15.623 Max: 1 00:18:15.623 Min: 1 00:18:15.623 Number of Namespaces: 0 00:18:15.623 Compare Command: Not Supported 00:18:15.623 Write Uncorrectable Command: Not Supported 00:18:15.623 Dataset Management Command: Not Supported 00:18:15.623 Write Zeroes Command: Not Supported 00:18:15.623 Set Features Save Field: Not Supported 00:18:15.623 Reservations: Not Supported 00:18:15.623 Timestamp: Not Supported 00:18:15.623 Copy: Not Supported 00:18:15.623 Volatile Write Cache: Not Present 00:18:15.623 Atomic Write Unit (Normal): 1 00:18:15.623 Atomic Write Unit (PFail): 1 00:18:15.623 Atomic Compare & Write Unit: 1 00:18:15.623 Fused Compare & Write: Not Supported 00:18:15.623 Scatter-Gather List 00:18:15.623 SGL Command Set: Supported 00:18:15.623 SGL Keyed: Not Supported 00:18:15.623 SGL Bit Bucket Descriptor: Not Supported 00:18:15.623 SGL Metadata Pointer: Not Supported 00:18:15.623 Oversized SGL: Not Supported 00:18:15.623 SGL Metadata Address: Not Supported 00:18:15.623 SGL Offset: Supported 00:18:15.623 Transport SGL Data Block: Not Supported 00:18:15.623 Replay Protected Memory Block: Not Supported 00:18:15.623 00:18:15.623 Firmware Slot Information 00:18:15.623 ========================= 00:18:15.623 Active slot: 0 00:18:15.623 00:18:15.623 00:18:15.623 Error Log 
00:18:15.623 ========= 00:18:15.623 00:18:15.623 Active Namespaces 00:18:15.623 ================= 00:18:15.623 Discovery Log Page 00:18:15.623 ================== 00:18:15.623 Generation Counter: 2 00:18:15.623 Number of Records: 2 00:18:15.623 Record Format: 0 00:18:15.623 00:18:15.623 Discovery Log Entry 0 00:18:15.623 ---------------------- 00:18:15.623 Transport Type: 3 (TCP) 00:18:15.623 Address Family: 1 (IPv4) 00:18:15.623 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:15.623 Entry Flags: 00:18:15.623 Duplicate Returned Information: 0 00:18:15.623 Explicit Persistent Connection Support for Discovery: 0 00:18:15.623 Transport Requirements: 00:18:15.623 Secure Channel: Not Specified 00:18:15.623 Port ID: 1 (0x0001) 00:18:15.623 Controller ID: 65535 (0xffff) 00:18:15.623 Admin Max SQ Size: 32 00:18:15.623 Transport Service Identifier: 4420 00:18:15.623 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:15.623 Transport Address: 10.0.0.1 00:18:15.623 Discovery Log Entry 1 00:18:15.623 ---------------------- 00:18:15.623 Transport Type: 3 (TCP) 00:18:15.623 Address Family: 1 (IPv4) 00:18:15.623 Subsystem Type: 2 (NVM Subsystem) 00:18:15.623 Entry Flags: 00:18:15.623 Duplicate Returned Information: 0 00:18:15.623 Explicit Persistent Connection Support for Discovery: 0 00:18:15.623 Transport Requirements: 00:18:15.623 Secure Channel: Not Specified 00:18:15.623 Port ID: 1 (0x0001) 00:18:15.623 Controller ID: 65535 (0xffff) 00:18:15.623 Admin Max SQ Size: 32 00:18:15.623 Transport Service Identifier: 4420 00:18:15.623 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:15.623 Transport Address: 10.0.0.1 00:18:15.623 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:15.883 get_feature(0x01) failed 00:18:15.883 get_feature(0x02) failed 00:18:15.883 get_feature(0x04) failed 00:18:15.883 ===================================================== 00:18:15.883 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:15.883 ===================================================== 00:18:15.883 Controller Capabilities/Features 00:18:15.883 ================================ 00:18:15.883 Vendor ID: 0000 00:18:15.883 Subsystem Vendor ID: 0000 00:18:15.883 Serial Number: 8cf981cccf5df77f3c2a 00:18:15.883 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:15.884 Firmware Version: 6.8.9-20 00:18:15.884 Recommended Arb Burst: 6 00:18:15.884 IEEE OUI Identifier: 00 00 00 00:18:15.884 Multi-path I/O 00:18:15.884 May have multiple subsystem ports: Yes 00:18:15.884 May have multiple controllers: Yes 00:18:15.884 Associated with SR-IOV VF: No 00:18:15.884 Max Data Transfer Size: Unlimited 00:18:15.884 Max Number of Namespaces: 1024 00:18:15.884 Max Number of I/O Queues: 128 00:18:15.884 NVMe Specification Version (VS): 1.3 00:18:15.884 NVMe Specification Version (Identify): 1.3 00:18:15.884 Maximum Queue Entries: 1024 00:18:15.884 Contiguous Queues Required: No 00:18:15.884 Arbitration Mechanisms Supported 00:18:15.884 Weighted Round Robin: Not Supported 00:18:15.884 Vendor Specific: Not Supported 00:18:15.884 Reset Timeout: 7500 ms 00:18:15.884 Doorbell Stride: 4 bytes 00:18:15.884 NVM Subsystem Reset: Not Supported 00:18:15.884 Command Sets Supported 00:18:15.884 NVM Command Set: Supported 00:18:15.884 Boot Partition: Not Supported 00:18:15.884 Memory 
Page Size Minimum: 4096 bytes 00:18:15.884 Memory Page Size Maximum: 4096 bytes 00:18:15.884 Persistent Memory Region: Not Supported 00:18:15.884 Optional Asynchronous Events Supported 00:18:15.884 Namespace Attribute Notices: Supported 00:18:15.884 Firmware Activation Notices: Not Supported 00:18:15.884 ANA Change Notices: Supported 00:18:15.884 PLE Aggregate Log Change Notices: Not Supported 00:18:15.884 LBA Status Info Alert Notices: Not Supported 00:18:15.884 EGE Aggregate Log Change Notices: Not Supported 00:18:15.884 Normal NVM Subsystem Shutdown event: Not Supported 00:18:15.884 Zone Descriptor Change Notices: Not Supported 00:18:15.884 Discovery Log Change Notices: Not Supported 00:18:15.884 Controller Attributes 00:18:15.884 128-bit Host Identifier: Supported 00:18:15.884 Non-Operational Permissive Mode: Not Supported 00:18:15.884 NVM Sets: Not Supported 00:18:15.884 Read Recovery Levels: Not Supported 00:18:15.884 Endurance Groups: Not Supported 00:18:15.884 Predictable Latency Mode: Not Supported 00:18:15.884 Traffic Based Keep ALive: Supported 00:18:15.884 Namespace Granularity: Not Supported 00:18:15.884 SQ Associations: Not Supported 00:18:15.884 UUID List: Not Supported 00:18:15.884 Multi-Domain Subsystem: Not Supported 00:18:15.884 Fixed Capacity Management: Not Supported 00:18:15.884 Variable Capacity Management: Not Supported 00:18:15.884 Delete Endurance Group: Not Supported 00:18:15.884 Delete NVM Set: Not Supported 00:18:15.884 Extended LBA Formats Supported: Not Supported 00:18:15.884 Flexible Data Placement Supported: Not Supported 00:18:15.884 00:18:15.884 Controller Memory Buffer Support 00:18:15.884 ================================ 00:18:15.884 Supported: No 00:18:15.884 00:18:15.884 Persistent Memory Region Support 00:18:15.884 ================================ 00:18:15.884 Supported: No 00:18:15.884 00:18:15.884 Admin Command Set Attributes 00:18:15.884 ============================ 00:18:15.884 Security Send/Receive: Not Supported 00:18:15.884 Format NVM: Not Supported 00:18:15.884 Firmware Activate/Download: Not Supported 00:18:15.884 Namespace Management: Not Supported 00:18:15.884 Device Self-Test: Not Supported 00:18:15.884 Directives: Not Supported 00:18:15.884 NVMe-MI: Not Supported 00:18:15.884 Virtualization Management: Not Supported 00:18:15.884 Doorbell Buffer Config: Not Supported 00:18:15.884 Get LBA Status Capability: Not Supported 00:18:15.884 Command & Feature Lockdown Capability: Not Supported 00:18:15.884 Abort Command Limit: 4 00:18:15.884 Async Event Request Limit: 4 00:18:15.884 Number of Firmware Slots: N/A 00:18:15.884 Firmware Slot 1 Read-Only: N/A 00:18:15.884 Firmware Activation Without Reset: N/A 00:18:15.884 Multiple Update Detection Support: N/A 00:18:15.884 Firmware Update Granularity: No Information Provided 00:18:15.884 Per-Namespace SMART Log: Yes 00:18:15.884 Asymmetric Namespace Access Log Page: Supported 00:18:15.884 ANA Transition Time : 10 sec 00:18:15.884 00:18:15.884 Asymmetric Namespace Access Capabilities 00:18:15.884 ANA Optimized State : Supported 00:18:15.884 ANA Non-Optimized State : Supported 00:18:15.884 ANA Inaccessible State : Supported 00:18:15.884 ANA Persistent Loss State : Supported 00:18:15.884 ANA Change State : Supported 00:18:15.884 ANAGRPID is not changed : No 00:18:15.884 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:15.884 00:18:15.884 ANA Group Identifier Maximum : 128 00:18:15.884 Number of ANA Group Identifiers : 128 00:18:15.884 Max Number of Allowed Namespaces : 1024 00:18:15.884 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:15.884 Command Effects Log Page: Supported 00:18:15.884 Get Log Page Extended Data: Supported 00:18:15.884 Telemetry Log Pages: Not Supported 00:18:15.884 Persistent Event Log Pages: Not Supported 00:18:15.884 Supported Log Pages Log Page: May Support 00:18:15.884 Commands Supported & Effects Log Page: Not Supported 00:18:15.884 Feature Identifiers & Effects Log Page:May Support 00:18:15.884 NVMe-MI Commands & Effects Log Page: May Support 00:18:15.884 Data Area 4 for Telemetry Log: Not Supported 00:18:15.884 Error Log Page Entries Supported: 128 00:18:15.884 Keep Alive: Supported 00:18:15.884 Keep Alive Granularity: 1000 ms 00:18:15.884 00:18:15.884 NVM Command Set Attributes 00:18:15.884 ========================== 00:18:15.884 Submission Queue Entry Size 00:18:15.884 Max: 64 00:18:15.884 Min: 64 00:18:15.884 Completion Queue Entry Size 00:18:15.884 Max: 16 00:18:15.884 Min: 16 00:18:15.884 Number of Namespaces: 1024 00:18:15.884 Compare Command: Not Supported 00:18:15.884 Write Uncorrectable Command: Not Supported 00:18:15.884 Dataset Management Command: Supported 00:18:15.884 Write Zeroes Command: Supported 00:18:15.884 Set Features Save Field: Not Supported 00:18:15.884 Reservations: Not Supported 00:18:15.884 Timestamp: Not Supported 00:18:15.884 Copy: Not Supported 00:18:15.884 Volatile Write Cache: Present 00:18:15.884 Atomic Write Unit (Normal): 1 00:18:15.884 Atomic Write Unit (PFail): 1 00:18:15.884 Atomic Compare & Write Unit: 1 00:18:15.884 Fused Compare & Write: Not Supported 00:18:15.884 Scatter-Gather List 00:18:15.884 SGL Command Set: Supported 00:18:15.884 SGL Keyed: Not Supported 00:18:15.884 SGL Bit Bucket Descriptor: Not Supported 00:18:15.884 SGL Metadata Pointer: Not Supported 00:18:15.884 Oversized SGL: Not Supported 00:18:15.884 SGL Metadata Address: Not Supported 00:18:15.884 SGL Offset: Supported 00:18:15.884 Transport SGL Data Block: Not Supported 00:18:15.884 Replay Protected Memory Block: Not Supported 00:18:15.884 00:18:15.884 Firmware Slot Information 00:18:15.884 ========================= 00:18:15.884 Active slot: 0 00:18:15.884 00:18:15.884 Asymmetric Namespace Access 00:18:15.884 =========================== 00:18:15.884 Change Count : 0 00:18:15.884 Number of ANA Group Descriptors : 1 00:18:15.884 ANA Group Descriptor : 0 00:18:15.884 ANA Group ID : 1 00:18:15.884 Number of NSID Values : 1 00:18:15.884 Change Count : 0 00:18:15.884 ANA State : 1 00:18:15.884 Namespace Identifier : 1 00:18:15.884 00:18:15.884 Commands Supported and Effects 00:18:15.884 ============================== 00:18:15.884 Admin Commands 00:18:15.884 -------------- 00:18:15.884 Get Log Page (02h): Supported 00:18:15.884 Identify (06h): Supported 00:18:15.884 Abort (08h): Supported 00:18:15.884 Set Features (09h): Supported 00:18:15.884 Get Features (0Ah): Supported 00:18:15.884 Asynchronous Event Request (0Ch): Supported 00:18:15.884 Keep Alive (18h): Supported 00:18:15.884 I/O Commands 00:18:15.884 ------------ 00:18:15.884 Flush (00h): Supported 00:18:15.884 Write (01h): Supported LBA-Change 00:18:15.884 Read (02h): Supported 00:18:15.884 Write Zeroes (08h): Supported LBA-Change 00:18:15.884 Dataset Management (09h): Supported 00:18:15.884 00:18:15.884 Error Log 00:18:15.884 ========= 00:18:15.884 Entry: 0 00:18:15.884 Error Count: 0x3 00:18:15.884 Submission Queue Id: 0x0 00:18:15.884 Command Id: 0x5 00:18:15.884 Phase Bit: 0 00:18:15.884 Status Code: 0x2 00:18:15.884 Status Code Type: 0x0 00:18:15.884 Do Not Retry: 1 00:18:15.884 Error 
Location: 0x28 00:18:15.884 LBA: 0x0 00:18:15.884 Namespace: 0x0 00:18:15.884 Vendor Log Page: 0x0 00:18:15.884 ----------- 00:18:15.884 Entry: 1 00:18:15.884 Error Count: 0x2 00:18:15.884 Submission Queue Id: 0x0 00:18:15.884 Command Id: 0x5 00:18:15.884 Phase Bit: 0 00:18:15.884 Status Code: 0x2 00:18:15.884 Status Code Type: 0x0 00:18:15.884 Do Not Retry: 1 00:18:15.884 Error Location: 0x28 00:18:15.884 LBA: 0x0 00:18:15.884 Namespace: 0x0 00:18:15.885 Vendor Log Page: 0x0 00:18:15.885 ----------- 00:18:15.885 Entry: 2 00:18:15.885 Error Count: 0x1 00:18:15.885 Submission Queue Id: 0x0 00:18:15.885 Command Id: 0x4 00:18:15.885 Phase Bit: 0 00:18:15.885 Status Code: 0x2 00:18:15.885 Status Code Type: 0x0 00:18:15.885 Do Not Retry: 1 00:18:15.885 Error Location: 0x28 00:18:15.885 LBA: 0x0 00:18:15.885 Namespace: 0x0 00:18:15.885 Vendor Log Page: 0x0 00:18:15.885 00:18:15.885 Number of Queues 00:18:15.885 ================ 00:18:15.885 Number of I/O Submission Queues: 128 00:18:15.885 Number of I/O Completion Queues: 128 00:18:15.885 00:18:15.885 ZNS Specific Controller Data 00:18:15.885 ============================ 00:18:15.885 Zone Append Size Limit: 0 00:18:15.885 00:18:15.885 00:18:15.885 Active Namespaces 00:18:15.885 ================= 00:18:15.885 get_feature(0x05) failed 00:18:15.885 Namespace ID:1 00:18:15.885 Command Set Identifier: NVM (00h) 00:18:15.885 Deallocate: Supported 00:18:15.885 Deallocated/Unwritten Error: Not Supported 00:18:15.885 Deallocated Read Value: Unknown 00:18:15.885 Deallocate in Write Zeroes: Not Supported 00:18:15.885 Deallocated Guard Field: 0xFFFF 00:18:15.885 Flush: Supported 00:18:15.885 Reservation: Not Supported 00:18:15.885 Namespace Sharing Capabilities: Multiple Controllers 00:18:15.885 Size (in LBAs): 1310720 (5GiB) 00:18:15.885 Capacity (in LBAs): 1310720 (5GiB) 00:18:15.885 Utilization (in LBAs): 1310720 (5GiB) 00:18:15.885 UUID: d0d3c0d1-aa7f-48c2-8fe5-d44b9d22777b 00:18:15.885 Thin Provisioning: Not Supported 00:18:15.885 Per-NS Atomic Units: Yes 00:18:15.885 Atomic Boundary Size (Normal): 0 00:18:15.885 Atomic Boundary Size (PFail): 0 00:18:15.885 Atomic Boundary Offset: 0 00:18:15.885 NGUID/EUI64 Never Reused: No 00:18:15.885 ANA group ID: 1 00:18:15.885 Namespace Write Protected: No 00:18:15.885 Number of LBA Formats: 1 00:18:15.885 Current LBA Format: LBA Format #00 00:18:15.885 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:15.885 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:15.885 rmmod nvme_tcp 00:18:15.885 rmmod nvme_fabrics 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:15.885 14:56:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.885 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:16.144 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:16.403 14:56:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:16.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.970 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:17.228 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:17.228 ************************************ 00:18:17.228 END TEST nvmf_identify_kernel_target 00:18:17.228 ************************************ 00:18:17.228 00:18:17.228 real 0m3.326s 00:18:17.228 user 0m1.146s 00:18:17.228 sys 0m1.515s 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.228 ************************************ 00:18:17.228 START TEST nvmf_auth_host 00:18:17.228 ************************************ 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:17.228 * Looking for test storage... 
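The clean_kernel_target teardown traced just above reduces to a short configfs sequence. A minimal sketch, assuming the nqn.2016-06.io.spdk:testnqn layout this test uses (the redirect target of the "echo 0" step is not visible in the xtrace, so it is shown here as an assumption):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable  # disable the namespace (assumed target)
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn               # detach subsystem from the port
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet   # unload the kernel target modules once nothing references them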
00:18:17.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:17.228 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.489 --rc genhtml_branch_coverage=1 00:18:17.489 --rc genhtml_function_coverage=1 00:18:17.489 --rc genhtml_legend=1 00:18:17.489 --rc geninfo_all_blocks=1 00:18:17.489 --rc geninfo_unexecuted_blocks=1 00:18:17.489 00:18:17.489 ' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.489 --rc genhtml_branch_coverage=1 00:18:17.489 --rc genhtml_function_coverage=1 00:18:17.489 --rc genhtml_legend=1 00:18:17.489 --rc geninfo_all_blocks=1 00:18:17.489 --rc geninfo_unexecuted_blocks=1 00:18:17.489 00:18:17.489 ' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.489 --rc genhtml_branch_coverage=1 00:18:17.489 --rc genhtml_function_coverage=1 00:18:17.489 --rc genhtml_legend=1 00:18:17.489 --rc geninfo_all_blocks=1 00:18:17.489 --rc geninfo_unexecuted_blocks=1 00:18:17.489 00:18:17.489 ' 00:18:17.489 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.490 --rc genhtml_branch_coverage=1 00:18:17.490 --rc genhtml_function_coverage=1 00:18:17.490 --rc genhtml_legend=1 00:18:17.490 --rc geninfo_all_blocks=1 00:18:17.490 --rc geninfo_unexecuted_blocks=1 00:18:17.490 00:18:17.490 ' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:17.490 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.490 14:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:17.490 Cannot find device "nvmf_init_br" 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:17.490 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:17.490 Cannot find device "nvmf_init_br2" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:17.491 Cannot find device "nvmf_tgt_br" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.491 Cannot find device "nvmf_tgt_br2" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:17.491 Cannot find device "nvmf_init_br" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:17.491 Cannot find device "nvmf_init_br2" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:17.491 Cannot find device "nvmf_tgt_br" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:17.491 Cannot find device "nvmf_tgt_br2" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:17.491 Cannot find device "nvmf_br" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:17.491 Cannot find device "nvmf_init_if" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:17.491 Cannot find device "nvmf_init_if2" 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.491 14:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:17.491 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:17.750 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
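The nvmf_veth_init sequence above builds a self-contained test topology: two initiator-side interfaces in the root namespace, two target-side interfaces inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. A condensed sketch of the same commands with the addressing used by this run:

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if end is used for traffic, the *_br end is enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target ends move into the namespace where nvmf_tgt will run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the peer ends so initiator and target namespaces can reach each other
  # (the trace also brings every interface up before enslaving it)
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done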
00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:17.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:17.751 00:18:17.751 --- 10.0.0.3 ping statistics --- 00:18:17.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.751 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:17.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:17.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.118 ms 00:18:17.751 00:18:17.751 --- 10.0.0.4 ping statistics --- 00:18:17.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.751 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:17.751 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:18:18.009 00:18:18.009 --- 10.0.0.1 ping statistics --- 00:18:18.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.009 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:18.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:18.009 00:18:18.009 --- 10.0.0.2 ping statistics --- 00:18:18.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.009 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78609 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78609 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78609 ']' 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
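The ipts calls above tag every firewall rule the test adds with an "SPDK_NVMF:" comment; that is what lets the nvmf_tcp_fini/iptr step seen earlier strip only SPDK's rules while leaving the host's own ruleset intact. A minimal sketch of the pattern, using one of the rules from the trace:

  # setup: open TCP/4420 on the test interface, tagged with the rule's own text
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # teardown: rewrite the ruleset without any SPDK_NVMF-tagged lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore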
00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.009 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=889e65804d4edf49661819b8b1add10f 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oat 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 889e65804d4edf49661819b8b1add10f 0 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 889e65804d4edf49661819b8b1add10f 0 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=889e65804d4edf49661819b8b1add10f 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.266 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oat 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oat 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.oat 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.523 14:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0bb078708a3ed57a5bb1a91fcf2d9ac4b5f185f506cf29ceeb898869ed4a70ad 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GIs 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0bb078708a3ed57a5bb1a91fcf2d9ac4b5f185f506cf29ceeb898869ed4a70ad 3 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0bb078708a3ed57a5bb1a91fcf2d9ac4b5f185f506cf29ceeb898869ed4a70ad 3 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0bb078708a3ed57a5bb1a91fcf2d9ac4b5f185f506cf29ceeb898869ed4a70ad 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:18.523 14:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GIs 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GIs 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GIs 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c9b3c8234b4968e24adfcd60a9030d8c0f97f42d3d4d1529 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.67v 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c9b3c8234b4968e24adfcd60a9030d8c0f97f42d3d4d1529 0 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c9b3c8234b4968e24adfcd60a9030d8c0f97f42d3d4d1529 0 
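Each gen_dhchap_key invocation traced above draws random bytes with xxd and writes an nvme-cli-style DHHC-1 secret into a mode-0600 temp file. A minimal sketch of the first call (null digest, 32 hex characters); the python encoding step is collapsed in the xtrace, so the base64-plus-CRC32 layout below is an assumption based on the DHHC-1 key format, not taken from the script itself:

  key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes as 32 hex chars
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" 0 > "$file" <<'EOF'
  # assumed DHHC-1 encoding: base64(raw key bytes + little-endian CRC32), digest id 0 = null
  import base64, binascii, sys
  raw = bytes.fromhex(sys.argv[1])
  crc = binascii.crc32(raw).to_bytes(4, "little")
  print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
  EOF
  chmod 0600 "$file"                             # keys must not be world-readable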
00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c9b3c8234b4968e24adfcd60a9030d8c0f97f42d3d4d1529 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.67v 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.67v 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.67v 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e553c59b0123f0b36a916b4c03b8fb72ce810b5a4c559f8d 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7Wv 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e553c59b0123f0b36a916b4c03b8fb72ce810b5a4c559f8d 2 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e553c59b0123f0b36a916b4c03b8fb72ce810b5a4c559f8d 2 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e553c59b0123f0b36a916b4c03b8fb72ce810b5a4c559f8d 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7Wv 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7Wv 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Wv 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.524 14:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=680b2e2189a8007e09634e4aa433a8a9 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VcQ 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 680b2e2189a8007e09634e4aa433a8a9 1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 680b2e2189a8007e09634e4aa433a8a9 1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=680b2e2189a8007e09634e4aa433a8a9 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:18.524 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VcQ 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VcQ 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VcQ 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=95b0e15bf799fc575badd227b26dc3da 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4Px 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 95b0e15bf799fc575badd227b26dc3da 1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 95b0e15bf799fc575badd227b26dc3da 1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=95b0e15bf799fc575badd227b26dc3da 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4Px 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4Px 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4Px 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=372b1becef635e934727f34c91c97d2430e1a6788c2f5af8 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Tuh 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 372b1becef635e934727f34c91c97d2430e1a6788c2f5af8 2 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 372b1becef635e934727f34c91c97d2430e1a6788c2f5af8 2 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=372b1becef635e934727f34c91c97d2430e1a6788c2f5af8 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Tuh 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Tuh 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Tuh 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:18.783 14:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33cd614711d97c4004d18029f8ca0209 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0jM 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33cd614711d97c4004d18029f8ca0209 0 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33cd614711d97c4004d18029f8ca0209 0 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33cd614711d97c4004d18029f8ca0209 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0jM 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0jM 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.0jM 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4050ea283d11a5338c5789c26a908190adec2a73081b99354ae50ff38dfecb7e 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Mom 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4050ea283d11a5338c5789c26a908190adec2a73081b99354ae50ff38dfecb7e 3 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4050ea283d11a5338c5789c26a908190adec2a73081b99354ae50ff38dfecb7e 3 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4050ea283d11a5338c5789c26a908190adec2a73081b99354ae50ff38dfecb7e 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:18.783 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Mom 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Mom 00:18:19.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Mom 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78609 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78609 ']' 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.042 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oat 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GIs ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GIs 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.67v 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Wv ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7Wv 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VcQ 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4Px ]] 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Px 00:18:19.301 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Tuh 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.0jM ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.0jM 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Mom 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.302 14:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:19.302 14:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.869 Waiting for block devices as requested 00:18:19.869 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.869 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:20.436 No valid GPT data, bailing 00:18:20.436 14:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:20.436 No valid GPT data, bailing 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:20.436 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:20.694 No valid GPT data, bailing 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:20.694 No valid GPT data, bailing 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:20.694 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -a 10.0.0.1 -t tcp -s 4420 00:18:20.694 00:18:20.694 Discovery Log Number of Records 2, Generation counter 2 00:18:20.694 =====Discovery Log Entry 0====== 00:18:20.694 trtype: tcp 00:18:20.694 adrfam: ipv4 00:18:20.694 subtype: current discovery subsystem 00:18:20.694 treq: not specified, sq flow control disable supported 00:18:20.694 portid: 1 00:18:20.694 trsvcid: 4420 00:18:20.694 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:20.694 traddr: 10.0.0.1 00:18:20.694 eflags: none 00:18:20.694 sectype: none 00:18:20.694 =====Discovery Log Entry 1====== 00:18:20.694 trtype: tcp 00:18:20.694 adrfam: ipv4 00:18:20.695 subtype: nvme subsystem 00:18:20.695 treq: not specified, sq flow control disable supported 00:18:20.695 portid: 1 00:18:20.695 trsvcid: 4420 00:18:20.695 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:20.695 traddr: 10.0.0.1 00:18:20.695 eflags: none 00:18:20.695 sectype: none 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.695 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.956 nvme0n1 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.956 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.957 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.216 nvme0n1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.216 
14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.216 14:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.216 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.475 nvme0n1 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:21.475 14:56:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:21.475 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.476 14:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.476 nvme0n1 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:21.476 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.735 14:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.735 nvme0n1 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.735 
14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:21.735 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.736 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
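Each digest/dhgroup/keyid pass in the trace above follows the same connect-authenticate-detach pattern: nvmet_auth_set_key primes the kernel target with the expected DH-HMAC-CHAP secret for nqn.2024-02.io.spdk:host0, and the initiator is driven through rpc_cmd, reusing the key0..key4/ckey0..ckey3 names registered earlier with keyring_file_add_key. The sketch below condenses one such iteration; the configfs attribute names on the target side (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an assumption about what nvmet_auth_set_key writes, since the xtrace output only records the echoed values, while the rpc_cmd invocations mirror the ones logged above.

# Target side: expected secret for this host NQN (attribute names assumed, not shown in the trace).
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"                  # digest under test
echo ffdhe2048 > "$host_dir/dhchap_dhgroup"                    # DH group under test
echo "$key" > "$host_dir/dhchap_key"                           # DHHC-1 secret for this keyid
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only when a controller key exists (bidirectional auth)

# Initiator side: the key files were registered once, earlier in the trace, e.g.
#   rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oat
#   rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GIs
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'           # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The ffdhe3072 passes that begin immediately below repeat exactly this sequence, changing only the --dhchap-dhgroups value and the key/ckey pair selected by keyid.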
00:18:21.994 nvme0n1 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.994 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.995 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.253 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:22.254 14:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.254 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.512 nvme0n1 00:18:22.512 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.512 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.513 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.513 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.513 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.513 14:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.513 14:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.513 14:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.513 nvme0n1 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.513 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.772 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.773 nvme0n1 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.773 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.032 nvme0n1 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.032 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.033 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.304 nvme0n1 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
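The nvmet_auth_set_key frames above (host/auth.sh@42-@51) pick a digest, DH group and key id, then echo the hash name, the DH group and the DHHC-1 secrets for the target side. A minimal sketch of what those echoes presumably feed, assuming the standard Linux nvmet configfs layout for an allowed host; the configfs paths and attribute names are assumptions, not shown in this trace, and keys[]/ckeys[] are the secret arrays built earlier in the script:

# Sketch only: assumed configfs attributes for in-band auth on the kernel target.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac($digest)" > "$hostdir/dhchap_hash"      # e.g. 'hmac(sha256)' as echoed in the trace
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"          > "$hostdir/dhchap_key"       # DHHC-1 host secret
    # A controller secret is only written when a bidirectional key was generated.
    [[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"
}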
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.304 14:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:23.899 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.900 14:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.900 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 nvme0n1 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.158 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.159 14:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.159 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.417 nvme0n1 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.417 14:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.675 nvme0n1 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.675 nvme0n1 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.675 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.934 14:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.934 nvme0n1 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.934 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
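The connect_authenticate frames (host/auth.sh@55-@65) are the host-side half of each pass: restrict the SPDK initiator to one digest/DH group, attach with the named DH-HMAC-CHAP keys, confirm the controller appears, then detach. A sketch of one pass using the same RPCs that appear in the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the keyN/ckeyN names are assumed to have been registered in a keyring earlier in the script (not part of this excerpt):

digest=sha256 dhgroup=ffdhe4096 keyid=4    # one combination from this part of the trace

# Limit the initiator to the digest/DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the kernel target with the named DH-HMAC-CHAP keys.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

# Authentication succeeded if the controller is visible; then tear it down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0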
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.212 14:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.593 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.853 nvme0n1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.853 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.421 nvme0n1 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.421 14:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:27.421 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.422 14:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.422 14:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 nvme0n1 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.681 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.682 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.682 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.682 
14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.940 nvme0n1 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.940 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.199 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.458 nvme0n1 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.458 14:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:28.458 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.459 14:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.027 nvme0n1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.027 14:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.595 nvme0n1 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.595 
14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.595 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.162 nvme0n1 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.162 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.163 14:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.730 nvme0n1 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.730 14:56:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.730 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.731 14:56:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.731 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.298 nvme0n1 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:31.298 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.558 nvme0n1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.558 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 nvme0n1 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:31.818 
14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.818 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.819 nvme0n1 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.819 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.079 
14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 nvme0n1 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:32.079 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.080 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 nvme0n1 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.339 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.339 nvme0n1 00:18:32.340 14:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.598 
14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.598 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.599 14:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.599 nvme0n1 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.599 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.858 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.858 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.858 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:32.858 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:32.859 14:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.859 nvme0n1 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.859 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.118 nvme0n1 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.118 
14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.118 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
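[Editor's note] The trace above loops over (digest, dhgroup, keyid) combinations, each iteration re-restricting the host's allowed DH-HMAC-CHAP parameters and re-attaching the controller with the matching key pair. The sketch below approximates one such iteration using SPDK's rpc.py client rather than the autotest rpc_cmd helper; the RPC names, flags, addresses, and NQNs are taken verbatim from the log, while the rpc.py path, the jq check, and the variable names are illustrative assumptions only.

```bash
#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate iteration as recorded in the trace.
# Assumes ./scripts/rpc.py is the SPDK RPC front-end; keyN/ckeyN are keyring
# names created earlier in the test (not shown here).

RPC=./scripts/rpc.py                       # assumed path to SPDK's RPC client
ADDR=10.0.0.1; PORT=4420                   # initiator IP/port from the trace
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0

digest=sha384                              # digests/dhgroups/keyids are cycled
dhgroup=ffdhe3072                          # by the outer loops in host/auth.sh
keyid=0

# Restrict the host to a single digest/dhgroup pair (host/auth.sh@60).
$RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key, adding the controller key only when one exists
# for this keyid (host/auth.sh@61).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ADDR" -s "$PORT" \
    -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Confirm the authenticated controller came up, then tear it down
# before the next combination (host/auth.sh@64-65).
$RPC bdev_nvme_get_controllers | jq -r '.[].name'
$RPC bdev_nvme_detach_controller nvme0
```

In the log itself this cycle repeats for ffdhe2048, ffdhe3072, ffdhe4096, and ffdhe6144 with key IDs 0 through 4, which is why the same set_options/attach/get/detach sequence recurs with only the dhgroup and keyid changing.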
00:18:33.378 nvme0n1 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:33.378 14:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.378 14:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.637 nvme0n1 00:18:33.637 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.637 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.637 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.638 14:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.638 14:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.638 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.897 nvme0n1 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.897 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.156 nvme0n1 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.156 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.157 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.416 nvme0n1 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.416 14:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.685 nvme0n1 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.685 14:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.685 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 nvme0n1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.945 14:56:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.945 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.512 nvme0n1 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.512 14:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 nvme0n1 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.771 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.339 nvme0n1 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.339 14:56:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.339 14:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.598 nvme0n1 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.598 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 nvme0n1 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.166 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.167 14:56:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.735 nvme0n1 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.735 14:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.735 14:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.735 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.303 nvme0n1 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.303 14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.303 
14:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.871 nvme0n1 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.871 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.872 14:56:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 nvme0n1 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:39.440 14:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.440 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.699 14:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.699 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 nvme0n1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:39.700 14:56:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.700 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 nvme0n1 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 nvme0n1 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.959 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.960 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.960 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.960 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.960 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 nvme0n1 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.219 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.478 nvme0n1 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.478 14:56:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.478 nvme0n1 00:18:40.478 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.478 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.478 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.478 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.478 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.479 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.737 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.738 nvme0n1 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:40.738 
14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.738 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 nvme0n1 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.997 
14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.997 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.256 nvme0n1 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.256 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.257 nvme0n1 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.257 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.524 14:56:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.524 nvme0n1 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.524 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.792 
14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:41.792 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.793 14:56:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.793 nvme0n1 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.793 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:42.052 14:56:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.052 nvme0n1 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.052 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.311 14:56:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.311 nvme0n1 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.311 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:42.570 
14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.570 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.571 14:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
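The ffdhe4096 pass of connect_authenticate() completes at this point; the trace below repeats the same loop for ffdhe6144 and then ffdhe8192. For reference, one iteration reduces to the RPC sequence sketched here with SPDK's rpc.py (rpc_cmd in the trace is a thin wrapper around it). The rpc.py path is an assumption, and key0/ckey0 refer to key objects set up earlier in the test run (not shown in this excerpt); the addresses, NQNs, and flags are copied verbatim from the trace.

# Minimal sketch of one connect_authenticate() iteration (sha512 + ffdhe4096, keyid 0).
# Assumes an SPDK target is already listening on 10.0.0.1:4420 with matching DH-HMAC-CHAP
# secrets configured, and that key objects "key0"/"ckey0" exist (registered earlier, not shown).
rpc=./scripts/rpc.py   # path is an assumption; rpc_cmd in the trace resolves this itself

# Restrict the initiator to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Connect with DH-HMAC-CHAP: --dhchap-key authenticates the host,
# --dhchap-ctrlr-key adds bidirectional (controller) authentication.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller actually came up under the expected name...
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# ...then tear it down before the next digest/dhgroup/key combination.
"$rpc" bdev_nvme_detach_controller nvme0

The host/auth.sh@48 through @51 echoes visible in the trace (the 'hmac(sha512)' string, the DH group name, and the DHHC-1 secrets) are the corresponding target-side values that nvmet_auth_set_key installs before each connection attempt.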
00:18:42.571 nvme0n1 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.571 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.830 14:56:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.830 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.089 nvme0n1 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.089 14:56:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.089 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.090 14:56:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.090 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.348 nvme0n1 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.348 14:56:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.607 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.866 nvme0n1 00:18:43.866 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.866 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.866 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.866 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.866 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.867 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.126 nvme0n1 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.126 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.385 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.386 14:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.654 nvme0n1 00:18:44.654 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg5ZTY1ODA0ZDRlZGY0OTY2MTgxOWI4YjFhZGQxMGYDgJ+2: 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGJiMDc4NzA4YTNlZDU3YTViYjFhOTFmY2YyZDlhYzRiNWYxODVmNTA2Y2YyOWNlZWI4OTg4NjllZDRhNzBhZIm8JR8=: 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.655 14:56:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.655 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.223 nvme0n1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.223 14:56:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.223 14:56:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.875 nvme0n1 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.875 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.450 nvme0n1 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzcyYjFiZWNlZjYzNWU5MzQ3MjdmMzRjOTFjOTdkMjQzMGUxYTY3ODhjMmY1YWY4/9r8rg==: 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjZDYxNDcxMWQ5N2M0MDA0ZDE4MDI5ZjhjYTAyMDmAkbxk: 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.450 14:57:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.017 nvme0n1 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDA1MGVhMjgzZDExYTUzMzhjNTc4OWMyNmE5MDgxOTBhZGVjMmE3MzA4MWI5OTM1NGFlNTBmZjM4ZGZlY2I3ZRRqL9I=: 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.017 14:57:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.017 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.018 14:57:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 nvme0n1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 request: 00:18:47.584 { 00:18:47.584 "name": "nvme0", 00:18:47.584 "trtype": "tcp", 00:18:47.584 "traddr": "10.0.0.1", 00:18:47.584 "adrfam": "ipv4", 00:18:47.584 "trsvcid": "4420", 00:18:47.584 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:47.584 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:47.584 "prchk_reftag": false, 00:18:47.584 "prchk_guard": false, 00:18:47.584 "hdgst": false, 00:18:47.584 "ddgst": false, 00:18:47.584 "allow_unrecognized_csi": false, 00:18:47.584 "method": "bdev_nvme_attach_controller", 00:18:47.584 "req_id": 1 00:18:47.584 } 00:18:47.584 Got JSON-RPC error response 00:18:47.584 response: 00:18:47.584 { 00:18:47.584 "code": -5, 00:18:47.584 "message": "Input/output error" 00:18:47.584 } 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.584 request: 00:18:47.584 { 00:18:47.584 "name": "nvme0", 00:18:47.584 "trtype": "tcp", 00:18:47.584 "traddr": "10.0.0.1", 00:18:47.584 "adrfam": "ipv4", 00:18:47.584 "trsvcid": "4420", 00:18:47.584 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:47.584 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:47.584 "prchk_reftag": false, 00:18:47.584 "prchk_guard": false, 00:18:47.584 "hdgst": false, 00:18:47.584 "ddgst": false, 00:18:47.584 "dhchap_key": "key2", 00:18:47.584 "allow_unrecognized_csi": false, 00:18:47.584 "method": "bdev_nvme_attach_controller", 00:18:47.584 "req_id": 1 00:18:47.584 } 00:18:47.584 Got JSON-RPC error response 00:18:47.584 response: 00:18:47.584 { 00:18:47.584 "code": -5, 00:18:47.584 "message": "Input/output error" 00:18:47.584 } 00:18:47.584 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.585 14:57:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.585 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 request: 00:18:47.844 { 00:18:47.844 "name": "nvme0", 00:18:47.844 "trtype": "tcp", 00:18:47.844 "traddr": "10.0.0.1", 00:18:47.844 "adrfam": "ipv4", 00:18:47.844 "trsvcid": "4420", 
00:18:47.844 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:47.844 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:47.844 "prchk_reftag": false, 00:18:47.844 "prchk_guard": false, 00:18:47.844 "hdgst": false, 00:18:47.844 "ddgst": false, 00:18:47.844 "dhchap_key": "key1", 00:18:47.844 "dhchap_ctrlr_key": "ckey2", 00:18:47.844 "allow_unrecognized_csi": false, 00:18:47.844 "method": "bdev_nvme_attach_controller", 00:18:47.844 "req_id": 1 00:18:47.844 } 00:18:47.844 Got JSON-RPC error response 00:18:47.844 response: 00:18:47.844 { 00:18:47.844 "code": -5, 00:18:47.844 "message": "Input/output error" 00:18:47.844 } 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 nvme0n1 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:47.844 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.103 request: 00:18:48.103 { 00:18:48.103 "name": "nvme0", 00:18:48.103 "dhchap_key": "key1", 00:18:48.103 "dhchap_ctrlr_key": "ckey2", 00:18:48.103 "method": "bdev_nvme_set_keys", 00:18:48.103 "req_id": 1 00:18:48.103 } 00:18:48.103 Got JSON-RPC error response 00:18:48.103 response: 00:18:48.103 
{ 00:18:48.103 "code": -13, 00:18:48.103 "message": "Permission denied" 00:18:48.103 } 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.103 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.104 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:48.104 14:57:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliM2M4MjM0YjQ5NjhlMjRhZGZjZDYwYTkwMzBkOGMwZjk3ZjQyZDNkNGQxNTI5YwsGkQ==: 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: ]] 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU1M2M1OWIwMTIzZjBiMzZhOTE2YjRjMDNiOGZiNzJjZTgxMGI1YTRjNTU5ZjhkeQcQzQ==: 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.040 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.299 nvme0n1 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjgwYjJlMjE4OWE4MDA3ZTA5NjM0ZTRhYTQzM2E4YTlFpQnx: 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: ]] 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTViMGUxNWJmNzk5ZmM1NzViYWRkMjI3YjI2ZGMzZGEYhU1/: 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.299 request: 00:18:49.299 { 00:18:49.299 "name": "nvme0", 00:18:49.299 "dhchap_key": "key2", 00:18:49.299 "dhchap_ctrlr_key": "ckey1", 00:18:49.299 "method": "bdev_nvme_set_keys", 00:18:49.299 "req_id": 1 00:18:49.299 } 00:18:49.299 Got JSON-RPC error response 00:18:49.299 response: 00:18:49.299 { 00:18:49.299 "code": -13, 00:18:49.299 "message": "Permission denied" 00:18:49.299 } 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:49.299 14:57:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:50.236 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:50.237 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.495 rmmod nvme_tcp 00:18:50.495 rmmod nvme_fabrics 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78609 ']' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78609 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78609 ']' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78609 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78609 00:18:50.495 killing process with pid 78609 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78609' 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78609 00:18:50.495 14:57:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78609 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:50.754 14:57:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.754 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:51.013 14:57:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:51.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:51.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:51.840 14:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oat /tmp/spdk.key-null.67v /tmp/spdk.key-sha256.VcQ /tmp/spdk.key-sha384.Tuh /tmp/spdk.key-sha512.Mom /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:51.840 14:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:52.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.359 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.359 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.359 00:18:52.359 real 0m35.052s 00:18:52.359 user 0m32.262s 00:18:52.359 sys 0m3.833s 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.359 ************************************ 00:18:52.359 END TEST nvmf_auth_host 00:18:52.359 ************************************ 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.359 ************************************ 00:18:52.359 START TEST nvmf_digest 00:18:52.359 ************************************ 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:52.359 * Looking for test storage... 
00:18:52.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:52.359 14:57:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.619 --rc genhtml_branch_coverage=1 00:18:52.619 --rc genhtml_function_coverage=1 00:18:52.619 --rc genhtml_legend=1 00:18:52.619 --rc geninfo_all_blocks=1 00:18:52.619 --rc geninfo_unexecuted_blocks=1 00:18:52.619 00:18:52.619 ' 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.619 --rc genhtml_branch_coverage=1 00:18:52.619 --rc genhtml_function_coverage=1 00:18:52.619 --rc genhtml_legend=1 00:18:52.619 --rc geninfo_all_blocks=1 00:18:52.619 --rc geninfo_unexecuted_blocks=1 00:18:52.619 00:18:52.619 ' 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.619 --rc genhtml_branch_coverage=1 00:18:52.619 --rc genhtml_function_coverage=1 00:18:52.619 --rc genhtml_legend=1 00:18:52.619 --rc geninfo_all_blocks=1 00:18:52.619 --rc geninfo_unexecuted_blocks=1 00:18:52.619 00:18:52.619 ' 00:18:52.619 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.619 --rc genhtml_branch_coverage=1 00:18:52.619 --rc genhtml_function_coverage=1 00:18:52.619 --rc genhtml_legend=1 00:18:52.620 --rc geninfo_all_blocks=1 00:18:52.620 --rc geninfo_unexecuted_blocks=1 00:18:52.620 00:18:52.620 ' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.620 14:57:07 
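The lt/cmp_versions expansion above is how the suite decides whether the installed lcov is new enough for the branch/function coverage flags it then exports. A minimal sketch of that comparison, reconstructed from the trace (the real scripts/common.sh helper supports more operators and edge cases):

lt() { cmp_versions "$1" '<' "$2"; }             # lt 1.15 2 is the call traced above
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"               # split on '.', '-' and ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0              # the decimal helper: non-numeric components count as 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all compared components were equal
}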
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:52.620 Cannot find device "nvmf_init_br" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:52.620 Cannot find device "nvmf_init_br2" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:52.620 Cannot find device "nvmf_tgt_br" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:52.620 Cannot find device "nvmf_tgt_br2" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:52.620 Cannot find device "nvmf_init_br" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:52.620 Cannot find device "nvmf_init_br2" 00:18:52.620 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:52.621 Cannot find device "nvmf_tgt_br" 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:52.621 Cannot find device "nvmf_tgt_br2" 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:52.621 Cannot find device "nvmf_br" 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:52.621 Cannot find device "nvmf_init_if" 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:52.621 Cannot find device "nvmf_init_if2" 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.621 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.880 14:57:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:52.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:52.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:18:52.880 00:18:52.880 --- 10.0.0.3 ping statistics --- 00:18:52.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.880 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:52.880 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:52.880 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:18:52.880 00:18:52.880 --- 10.0.0.4 ping statistics --- 00:18:52.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.880 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:52.880 00:18:52.880 --- 10.0.0.1 ping statistics --- 00:18:52.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.880 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:52.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:52.880 00:18:52.880 --- 10.0.0.2 ping statistics --- 00:18:52.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.880 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 ************************************ 00:18:52.880 START TEST nvmf_digest_clean 00:18:52.880 ************************************ 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
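Before the digest runs start, nvmf_veth_init (traced above, NET_TYPE=virt) has built a small virtual test network: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, all peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for port 4420, and pings to confirm reachability. A condensed sketch of that topology, showing only the first initiator/target pair (the *_if2/*_br2 pair is set up the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                         # the bridge joins the host-side peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                              # initiator host -> target address across the bridge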
00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:52.880 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80234 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80234 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80234 ']' 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.881 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.140 [2024-11-22 14:57:07.573891] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:18:53.140 [2024-11-22 14:57:07.573982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.140 [2024-11-22 14:57:07.728414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.140 [2024-11-22 14:57:07.784692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.140 [2024-11-22 14:57:07.784768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.140 [2024-11-22 14:57:07.784782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.140 [2024-11-22 14:57:07.784793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.140 [2024-11-22 14:57:07.784802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
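nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc, and waitforlisten then blocks until the target's RPC socket answers. A rough sketch of that handshake; the rpc_get_methods polling loop is an assumption about how waitforlisten checks readiness, not a verbatim copy of autotest_common.sh:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
for (( i = 0; i < 100; i++ )); do                # max_retries=100 as in the trace
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
        break                                    # target process is up and serving RPCs
    fi
    sleep 0.1
done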
00:18:53.140 [2024-11-22 14:57:07.785264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.400 14:57:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.400 [2024-11-22 14:57:07.946938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.400 null0 00:18:53.400 [2024-11-22 14:57:08.012246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.400 [2024-11-22 14:57:08.036430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80258 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80258 /var/tmp/bperf.sock 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80258 ']' 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
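The rpc_cmd payload under common_target_config is not echoed in the trace, but its effects are visible right above: the uring socket override, a null0 bdev, the TCP transport init, and a listener on 10.0.0.3:4420 for nqn.2016-06.io.spdk:cnode1. A sequence of standard SPDK RPCs that would produce those notices looks roughly like this; it is an assumption about what digest.sh configures, and the bdev size/block size are illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
$rpc -s $sock sock_set_default_impl -i uring                    # inferred from the "override: uring" notice
$rpc -s $sock framework_start_init                              # target was started with --wait-for-rpc
$rpc -s $sock nvmf_create_transport -t tcp -o                   # NVMF_TRANSPORT_OPTS='-t tcp -o' from the trace
$rpc -s $sock bdev_null_create null0 1000 512                   # the null0 bdev listed above (sizes illustrative)
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420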
rpc_addr=/var/tmp/bperf.sock 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.400 14:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.659 [2024-11-22 14:57:08.105034] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:18:53.659 [2024-11-22 14:57:08.105144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80258 ] 00:18:53.659 [2024-11-22 14:57:08.254849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.659 [2024-11-22 14:57:08.315975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.596 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.596 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:54.596 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:54.596 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:54.596 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:54.854 [2024-11-22 14:57:09.337246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.854 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:54.854 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.113 nvme0n1 00:18:55.113 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:55.113 14:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:55.371 Running I/O for 2 seconds... 
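run_bperf drives a separate bdevperf instance over its own RPC socket (the -w randread -o 4096 -q 128 -t 2 flags on the bdevperf command line above are just the rw/bs/qd/runtime parameters run_bperf was given). The trace then expands the two small helpers it talks to that instance with, and the controller attach that creates nvme0n1 with data digest enabled. A sketch of those pieces as they unwind above; wrapping the traced paths in functions is the only inference here:

bperfsock=/var/tmp/bperf.sock
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" "$@"; }
bperf_py()  { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperfsock" "$@"; }

bperf_rpc framework_start_init                   # bdevperf was launched with --wait-for-rpc
# --ddgst turns on the NVMe/TCP data digest (crc32c) on the initiator side; the digest_clean
# variants leave header digest off and verify the accel stats after the run.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
bperf_py perform_tests                           # run the configured workload for the 2-second window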
00:18:57.243 18034.00 IOPS, 70.45 MiB/s [2024-11-22T14:57:11.908Z] 18097.50 IOPS, 70.69 MiB/s 00:18:57.243 Latency(us) 00:18:57.243 [2024-11-22T14:57:11.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:57.243 nvme0n1 : 2.01 18116.02 70.77 0.00 0.00 7059.80 6613.18 17158.52 00:18:57.243 [2024-11-22T14:57:11.908Z] =================================================================================================================== 00:18:57.243 [2024-11-22T14:57:11.908Z] Total : 18116.02 70.77 0.00 0.00 7059.80 6613.18 17158.52 00:18:57.243 { 00:18:57.243 "results": [ 00:18:57.243 { 00:18:57.243 "job": "nvme0n1", 00:18:57.243 "core_mask": "0x2", 00:18:57.243 "workload": "randread", 00:18:57.243 "status": "finished", 00:18:57.243 "queue_depth": 128, 00:18:57.243 "io_size": 4096, 00:18:57.243 "runtime": 2.005021, 00:18:57.243 "iops": 18116.019732461657, 00:18:57.243 "mibps": 70.76570207992835, 00:18:57.243 "io_failed": 0, 00:18:57.243 "io_timeout": 0, 00:18:57.243 "avg_latency_us": 7059.800905111461, 00:18:57.243 "min_latency_us": 6613.178181818182, 00:18:57.243 "max_latency_us": 17158.516363636365 00:18:57.243 } 00:18:57.243 ], 00:18:57.243 "core_count": 1 00:18:57.243 } 00:18:57.243 14:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:57.243 14:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:57.243 14:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:57.243 | select(.opcode=="crc32c") 00:18:57.243 | "\(.module_name) \(.executed)"' 00:18:57.243 14:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:57.243 14:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80258 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80258 ']' 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80258 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:57.503 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80258 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
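After each run the test pulls the accel framework statistics out of the bdevperf instance and checks that crc32c digests were actually computed, and by the expected module (software here; dsa in the offloaded variants). A sketch of that check using the jq filter shown verbatim above (the JSON field names are the ones that filter implies):

read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software                     # scan_dsa=false for this variant
(( acc_executed > 0 ))                  # digests were really computed during the run
[[ $acc_module == "$exp_module" ]]      # ...and by the accel module the variant expects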
00:18:57.762 killing process with pid 80258 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80258' 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80258 00:18:57.762 Received shutdown signal, test time was about 2.000000 seconds 00:18:57.762 00:18:57.762 Latency(us) 00:18:57.762 [2024-11-22T14:57:12.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.762 [2024-11-22T14:57:12.427Z] =================================================================================================================== 00:18:57.762 [2024-11-22T14:57:12.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80258 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80319 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80319 /var/tmp/bperf.sock 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80319 ']' 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.762 14:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:58.022 [2024-11-22 14:57:12.436622] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:18:58.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:58.022 Zero copy mechanism will not be used. 
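Each bdevperf instance is torn down with killprocess, whose expansion is visible above: confirm the pid is set and alive, make sure it is not a sudo wrapper, then kill and reap it before the next variant starts. A simplified sketch of that logic (the real autotest_common.sh also handles FreeBSD and sudo-wrapped processes):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0                            # nothing to do if it is already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for a bdevperf instance
        [ "$process_name" = sudo ] && return 1            # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                   # reap it so the next bperf run starts clean
}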
00:18:58.022 [2024-11-22 14:57:12.436720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80319 ] 00:18:58.022 [2024-11-22 14:57:12.584800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.022 [2024-11-22 14:57:12.646521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.957 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.957 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:58.957 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:58.957 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:58.957 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:59.215 [2024-11-22 14:57:13.647596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.216 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.216 14:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.474 nvme0n1 00:18:59.474 14:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:59.474 14:57:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:59.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:59.733 Zero copy mechanism will not be used. 00:18:59.733 Running I/O for 2 seconds... 
00:19:01.603 7536.00 IOPS, 942.00 MiB/s [2024-11-22T14:57:16.268Z] 7152.00 IOPS, 894.00 MiB/s 00:19:01.603 Latency(us) 00:19:01.603 [2024-11-22T14:57:16.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:01.603 nvme0n1 : 2.00 7148.99 893.62 0.00 0.00 2234.82 1936.29 5451.40 00:19:01.603 [2024-11-22T14:57:16.268Z] =================================================================================================================== 00:19:01.603 [2024-11-22T14:57:16.268Z] Total : 7148.99 893.62 0.00 0.00 2234.82 1936.29 5451.40 00:19:01.603 { 00:19:01.603 "results": [ 00:19:01.603 { 00:19:01.603 "job": "nvme0n1", 00:19:01.603 "core_mask": "0x2", 00:19:01.603 "workload": "randread", 00:19:01.603 "status": "finished", 00:19:01.603 "queue_depth": 16, 00:19:01.603 "io_size": 131072, 00:19:01.603 "runtime": 2.00308, 00:19:01.603 "iops": 7148.990554545999, 00:19:01.603 "mibps": 893.6238193182498, 00:19:01.603 "io_failed": 0, 00:19:01.603 "io_timeout": 0, 00:19:01.603 "avg_latency_us": 2234.8225332656175, 00:19:01.603 "min_latency_us": 1936.290909090909, 00:19:01.603 "max_latency_us": 5451.403636363636 00:19:01.603 } 00:19:01.603 ], 00:19:01.603 "core_count": 1 00:19:01.603 } 00:19:01.604 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:01.604 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:01.604 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:01.604 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:01.604 | select(.opcode=="crc32c") 00:19:01.604 | "\(.module_name) \(.executed)"' 00:19:01.604 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80319 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80319 ']' 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80319 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80319 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:01.862 killing process with pid 80319 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80319' 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80319 00:19:01.862 Received shutdown signal, test time was about 2.000000 seconds 00:19:01.862 00:19:01.862 Latency(us) 00:19:01.862 [2024-11-22T14:57:16.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.862 [2024-11-22T14:57:16.527Z] =================================================================================================================== 00:19:01.862 [2024-11-22T14:57:16.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.862 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80319 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80379 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80379 /var/tmp/bperf.sock 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80379 ']' 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.121 14:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:02.121 [2024-11-22 14:57:16.768405] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:19:02.121 [2024-11-22 14:57:16.768568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80379 ] 00:19:02.380 [2024-11-22 14:57:16.910456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.380 [2024-11-22 14:57:16.957955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.380 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.380 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:02.380 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:02.380 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:02.380 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:02.946 [2024-11-22 14:57:17.381172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.946 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.946 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:03.204 nvme0n1 00:19:03.204 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:03.204 14:57:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:03.463 Running I/O for 2 seconds... 
00:19:05.340 18924.00 IOPS, 73.92 MiB/s [2024-11-22T14:57:20.005Z] 19558.50 IOPS, 76.40 MiB/s 00:19:05.340 Latency(us) 00:19:05.340 [2024-11-22T14:57:20.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:05.340 nvme0n1 : 2.01 19601.39 76.57 0.00 0.00 6524.59 3098.07 15252.01 00:19:05.340 [2024-11-22T14:57:20.005Z] =================================================================================================================== 00:19:05.340 [2024-11-22T14:57:20.005Z] Total : 19601.39 76.57 0.00 0.00 6524.59 3098.07 15252.01 00:19:05.340 { 00:19:05.340 "results": [ 00:19:05.340 { 00:19:05.340 "job": "nvme0n1", 00:19:05.340 "core_mask": "0x2", 00:19:05.340 "workload": "randwrite", 00:19:05.340 "status": "finished", 00:19:05.340 "queue_depth": 128, 00:19:05.340 "io_size": 4096, 00:19:05.340 "runtime": 2.008633, 00:19:05.341 "iops": 19601.39059748595, 00:19:05.341 "mibps": 76.5679320214295, 00:19:05.341 "io_failed": 0, 00:19:05.341 "io_timeout": 0, 00:19:05.341 "avg_latency_us": 6524.587754287772, 00:19:05.341 "min_latency_us": 3098.0654545454545, 00:19:05.341 "max_latency_us": 15252.014545454545 00:19:05.341 } 00:19:05.341 ], 00:19:05.341 "core_count": 1 00:19:05.341 } 00:19:05.341 14:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:05.341 14:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:05.341 14:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:05.341 14:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:05.341 14:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:05.341 | select(.opcode=="crc32c") 00:19:05.341 | "\(.module_name) \(.executed)"' 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80379 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80379 ']' 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80379 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80379 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:19:05.599 killing process with pid 80379 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80379' 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80379 00:19:05.599 Received shutdown signal, test time was about 2.000000 seconds 00:19:05.599 00:19:05.599 Latency(us) 00:19:05.599 [2024-11-22T14:57:20.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.599 [2024-11-22T14:57:20.264Z] =================================================================================================================== 00:19:05.599 [2024-11-22T14:57:20.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.599 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80379 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80433 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80433 /var/tmp/bperf.sock 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80433 ']' 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.858 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:06.117 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:06.117 Zero copy mechanism will not be used. 00:19:06.117 [2024-11-22 14:57:20.538299] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:19:06.117 [2024-11-22 14:57:20.538392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80433 ] 00:19:06.117 [2024-11-22 14:57:20.673069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.117 [2024-11-22 14:57:20.715590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.376 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.376 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:06.376 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:06.376 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:06.376 14:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:06.635 [2024-11-22 14:57:21.141813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:06.635 14:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:06.635 14:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:06.894 nvme0n1 00:19:07.152 14:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:07.152 14:57:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:07.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:07.152 Zero copy mechanism will not be used. 00:19:07.152 Running I/O for 2 seconds... 
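Before the 2-second run whose results are reported next, everything is wired up over the bdevperf RPC socket: digest.sh finishes framework initialization (bdevperf was launched with --wait-for-rpc), attaches an NVMe-oF controller to the listener at 10.0.0.3:4420 with data digest enabled, and then kicks off the workload through bdevperf.py. A condensed sketch of that sequence, using only commands that appear in the trace (paths, the nqn, and the nvme0 bdev name are taken from the trace, not invented):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # perform_tests runs the workload bdevperf was started with: -w randwrite -o 131072 -q 16 -t 2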
00:19:09.464 7517.00 IOPS, 939.62 MiB/s [2024-11-22T14:57:24.129Z] 7529.50 IOPS, 941.19 MiB/s 00:19:09.464 Latency(us) 00:19:09.464 [2024-11-22T14:57:24.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.464 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:09.464 nvme0n1 : 2.00 7528.55 941.07 0.00 0.00 2120.29 1333.06 4230.05 00:19:09.464 [2024-11-22T14:57:24.129Z] =================================================================================================================== 00:19:09.464 [2024-11-22T14:57:24.129Z] Total : 7528.55 941.07 0.00 0.00 2120.29 1333.06 4230.05 00:19:09.464 { 00:19:09.464 "results": [ 00:19:09.464 { 00:19:09.464 "job": "nvme0n1", 00:19:09.464 "core_mask": "0x2", 00:19:09.464 "workload": "randwrite", 00:19:09.464 "status": "finished", 00:19:09.464 "queue_depth": 16, 00:19:09.464 "io_size": 131072, 00:19:09.464 "runtime": 2.003175, 00:19:09.464 "iops": 7528.548429368378, 00:19:09.464 "mibps": 941.0685536710472, 00:19:09.464 "io_failed": 0, 00:19:09.464 "io_timeout": 0, 00:19:09.464 "avg_latency_us": 2120.2865939683284, 00:19:09.464 "min_latency_us": 1333.0618181818181, 00:19:09.464 "max_latency_us": 4230.050909090909 00:19:09.464 } 00:19:09.464 ], 00:19:09.464 "core_count": 1 00:19:09.465 } 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:09.465 | select(.opcode=="crc32c") 00:19:09.465 | "\(.module_name) \(.executed)"' 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80433 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80433 ']' 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80433 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.465 14:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80433 00:19:09.465 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.465 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
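The IOPS and MiB/s columns in the result table above are mutually consistent: at an I/O size of 131072 bytes, 7528.55 IOPS works out to roughly 941.07 MiB/s, which matches the mibps field in the JSON (just as 19601.39 IOPS at 4096 bytes gave 76.57 MiB/s in the previous run). A one-line sanity check using only the numbers reported above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7528.548429368378 * 131072 / (1024 * 1024) }'   # prints 941.07 MiB/s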
00:19:09.465 killing process with pid 80433 00:19:09.465 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80433' 00:19:09.465 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80433 00:19:09.465 Received shutdown signal, test time was about 2.000000 seconds 00:19:09.465 00:19:09.465 Latency(us) 00:19:09.465 [2024-11-22T14:57:24.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.465 [2024-11-22T14:57:24.130Z] =================================================================================================================== 00:19:09.465 [2024-11-22T14:57:24.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.465 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80433 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80234 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80234 ']' 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80234 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80234 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.723 killing process with pid 80234 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80234' 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80234 00:19:09.723 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80234 00:19:09.982 00:19:09.982 real 0m17.038s 00:19:09.982 user 0m32.863s 00:19:09.982 sys 0m5.138s 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:09.982 ************************************ 00:19:09.982 END TEST nvmf_digest_clean 00:19:09.982 ************************************ 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:09.982 ************************************ 00:19:09.982 START TEST nvmf_digest_error 00:19:09.982 ************************************ 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:09.982 14:57:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80514 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80514 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80514 ']' 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.982 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.241 [2024-11-22 14:57:24.654758] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:19:10.241 [2024-11-22 14:57:24.654833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.242 [2024-11-22 14:57:24.791659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.242 [2024-11-22 14:57:24.838170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.242 [2024-11-22 14:57:24.838235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.242 [2024-11-22 14:57:24.838245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.242 [2024-11-22 14:57:24.838252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.242 [2024-11-22 14:57:24.838258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
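For the error-path test the nvmf target is deliberately started with --wait-for-rpc: the accel framework is left uninitialized so the crc32c opcode can first be remapped to the error-injection accel module, which is what later produces the digest failures. The RPCs involved all appear in the trace that follows; a condensed sketch of the order they run in (rpc_cmd here talks to the target's default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o crc32c -m error                  # route crc32c through the "error" module before init
    # ... target finishes init, creates the null0 bdev and the TCP listener on 10.0.0.3:4420 ...
    $rpc accel_error_inject_error -o crc32c -t disable        # start with injection disabled
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256 # then corrupt 256 crc32c results

The initiator side is attached with --ddgst and --bdev-retry-count -1, so each corrupted digest surfaces below as a 'data digest error' and a COMMAND TRANSIENT TRANSPORT ERROR completion (dnr:0) that the bdev layer is allowed to retry.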
00:19:10.242 [2024-11-22 14:57:24.838641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.500 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.501 [2024-11-22 14:57:24.959054] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.501 14:57:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.501 [2024-11-22 14:57:25.037508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.501 null0 00:19:10.501 [2024-11-22 14:57:25.097310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.501 [2024-11-22 14:57:25.121482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80533 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80533 /var/tmp/bperf.sock 00:19:10.501 14:57:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80533 ']' 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.501 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:10.760 [2024-11-22 14:57:25.172144] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:19:10.760 [2024-11-22 14:57:25.172224] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80533 ] 00:19:10.760 [2024-11-22 14:57:25.311727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.760 [2024-11-22 14:57:25.363197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.019 [2024-11-22 14:57:25.434197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:11.019 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.019 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:11.019 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:11.019 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.278 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.536 nvme0n1 00:19:11.536 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:11.536 14:57:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.536 14:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:11.537 14:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.537 14:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:11.537 14:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:11.537 Running I/O for 2 seconds... 00:19:11.537 [2024-11-22 14:57:26.117438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.117488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.117502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.130760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.130791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.130802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.144089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.144131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.157346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.157384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.157396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.170605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.170636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.170647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.183895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.183925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21686 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.183936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.537 [2024-11-22 14:57:26.197108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.537 [2024-11-22 14:57:26.197138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.537 [2024-11-22 14:57:26.197150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.210404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.210433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.210445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.223691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.223723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.223734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.236961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.237002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.250165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.250195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.250206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.263392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.263421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.263431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.276674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.276704] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.276715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.289910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.289940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.289951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.303169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.303199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.303210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.316419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.316458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.316470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.329695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.329724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.329736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.342968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.342997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.343008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.356225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 14:57:26.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.356266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.369455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.796 [2024-11-22 
14:57:26.369483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.796 [2024-11-22 14:57:26.369494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.796 [2024-11-22 14:57:26.382724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.382753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.382763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.797 [2024-11-22 14:57:26.395974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.396004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.396015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.797 [2024-11-22 14:57:26.409227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.409255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.409266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.797 [2024-11-22 14:57:26.422505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.422535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.422546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.797 [2024-11-22 14:57:26.435755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.435786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.435797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.797 [2024-11-22 14:57:26.449002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:11.797 [2024-11-22 14:57:26.449033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.797 [2024-11-22 14:57:26.449044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.462233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.462263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.462275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.475482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.475511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.488756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.488785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.488796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.502014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.502043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.502054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.515236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.515265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.515276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.528551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.528592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.528603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.542447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.542477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.542488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.556458] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.556486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.556497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.570308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.570339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.570350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.585687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.585718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.585729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.599757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.599788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.599799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.614226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.614258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.614270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.628907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.628950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.628960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.642867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.642910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.642921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:12.057 [2024-11-22 14:57:26.656182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.656223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.656234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.669443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.669482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.682674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.682715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.682726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.695948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.695990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.696001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.057 [2024-11-22 14:57:26.709230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.057 [2024-11-22 14:57:26.709260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.057 [2024-11-22 14:57:26.709271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.722645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.722685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.722695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.735987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.736015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.736026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.749966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.749990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.750001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.764382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.764410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.764420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.777678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.777707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.777718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.791041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.791071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.791082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.804959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.804989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.818763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.818791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.818802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.832076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.832105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.832116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.845392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.845421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.858969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.858999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.859010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.872946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.872975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.872986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.886164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.886209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.886220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.899633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.899662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.899672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.912949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.912978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.912989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.926545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.926574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.926585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.939903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.939943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.939955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.953327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.953356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.953367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.317 [2024-11-22 14:57:26.972723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.317 [2024-11-22 14:57:26.972752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.317 [2024-11-22 14:57:26.972764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:26.986248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:26.986279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:26.986290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.000223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.000254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.000265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.014214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.014245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.014256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.027583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.027614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.027624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.040773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.040802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.040813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.053978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.054007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.054018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.067247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.067277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.067288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.080611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.080641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.080651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.094051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.094082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.094093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 18723.00 IOPS, 73.14 MiB/s [2024-11-22T14:57:27.242Z] [2024-11-22 14:57:27.108555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.108585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.108595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.121761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.121791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.121801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.135007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.135036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.135047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.148627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.148659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.148670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.162622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.162652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.175880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.175910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.175920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.189161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.189190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.189201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.202358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.202393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.202404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.215678] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.215708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.215719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.577 [2024-11-22 14:57:27.228963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.577 [2024-11-22 14:57:27.228992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.577 [2024-11-22 14:57:27.229003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.836 [2024-11-22 14:57:27.242243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.836 [2024-11-22 14:57:27.242280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.836 [2024-11-22 14:57:27.242290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.836 [2024-11-22 14:57:27.255534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.255562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.255573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.268775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.268804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.268814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.281983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.282012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.282023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.295293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.295321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.295331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:12.837 [2024-11-22 14:57:27.308555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.308583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.308593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.321883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.321912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.321923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.335140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.335169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.335180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.348417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.348445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.348456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.361686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.361716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.361728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.374939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.374968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.374979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.388205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.388235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.388246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.401453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.401493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.414924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.414954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.414966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.428901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.428940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.428951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.442401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.442429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.442440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.455655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.455683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.455694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.468858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.468886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.468896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.482060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.482088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.482099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:12.837 [2024-11-22 14:57:27.495426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:12.837 [2024-11-22 14:57:27.495453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.837 [2024-11-22 14:57:27.495463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.508634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.508663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.508675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.521879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.521906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.521917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.535073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.535102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.535113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.548336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.548364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.548384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.562006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.562034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.562045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.576014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.576042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:13.097 [2024-11-22 14:57:27.576053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.590014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.590042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.590053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.603423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.603451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.603462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.616916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.616944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.616955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.630487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.630515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.630526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.643691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.643719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.643730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.656989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.657017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.657028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.670188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.670218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:475 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.670229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.683440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.683474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.683486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.696728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.696756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.696767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.709931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.709958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.709969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.723320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.723349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.723359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.736752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.736781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.736791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.097 [2024-11-22 14:57:27.750730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.097 [2024-11-22 14:57:27.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.097 [2024-11-22 14:57:27.750769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.764663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.764691] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.764703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.778000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.778027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.778039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.791408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.791435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.791446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.804802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.804841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.804852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.818168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.818210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.818221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.837232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.837261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.837272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.850738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.850767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.850778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.863955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.863984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.863995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.877215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.877255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.877266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.890520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.890550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.890560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.903736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.903777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.903788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.917483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.917512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.917523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.931051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.931079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.931090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.944450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.944478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.944488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.958104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 
[2024-11-22 14:57:27.958138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.958149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.971479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.971521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.971532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.984859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.984889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.984899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:27.998142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:27.998171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:27.998182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.357 [2024-11-22 14:57:28.011331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.357 [2024-11-22 14:57:28.011360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.357 [2024-11-22 14:57:28.011381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.616 [2024-11-22 14:57:28.024596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.616 [2024-11-22 14:57:28.024624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.616 [2024-11-22 14:57:28.024635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.616 [2024-11-22 14:57:28.038247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0) 00:19:13.616 [2024-11-22 14:57:28.038286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:13.616 [2024-11-22 14:57:28.038297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:13.616 [2024-11-22 14:57:28.052113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18c92c0)
00:19:13.616 [2024-11-22 14:57:28.052141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.616 [2024-11-22 14:57:28.052152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:13.616 [2024-11-22 14:57:28.065384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0)
00:19:13.616 [2024-11-22 14:57:28.065422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.616 [2024-11-22 14:57:28.065433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:13.616 [2024-11-22 14:57:28.078698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0)
00:19:13.617 [2024-11-22 14:57:28.078727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.617 [2024-11-22 14:57:28.078737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:13.617 [2024-11-22 14:57:28.092103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18c92c0)
00:19:13.617 [2024-11-22 14:57:28.092132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:13.617 [2024-11-22 14:57:28.092142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:13.617 18786.00 IOPS, 73.38 MiB/s
00:19:13.617 Latency(us)
00:19:13.617 [2024-11-22T14:57:28.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:13.617 nvme0n1 : 2.00 18805.56 73.46 0.00 0.00 6802.18 6225.92 26214.40
00:19:13.617 [2024-11-22T14:57:28.282Z] ===================================================================================================================
00:19:13.617 [2024-11-22T14:57:28.282Z] Total : 18805.56 73.46 0.00 0.00 6802.18 6225.92 26214.40
00:19:13.617 {
00:19:13.617 "results": [
00:19:13.617 {
00:19:13.617 "job": "nvme0n1",
00:19:13.617 "core_mask": "0x2",
00:19:13.617 "workload": "randread",
00:19:13.617 "status": "finished",
00:19:13.617 "queue_depth": 128,
00:19:13.617 "io_size": 4096,
00:19:13.617 "runtime": 2.004726,
00:19:13.617 "iops": 18805.562455916668,
00:19:13.617 "mibps": 73.45922834342448,
00:19:13.617 "io_failed": 0,
00:19:13.617 "io_timeout": 0,
00:19:13.617 "avg_latency_us": 6802.179691536049,
00:19:13.617 "min_latency_us": 6225.92,
00:19:13.617 "max_latency_us": 26214.4
00:19:13.617 }
00:19:13.617 ],
00:19:13.617 "core_count": 1
00:19:13.617 }
00:19:13.617 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:13.617 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:13.617 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:13.617 | .driver_specific
00:19:13.617 | .nvme_error
00:19:13.617 | .status_code
00:19:13.617 | .command_transient_transport_error'
00:19:13.617 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80533
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80533 ']'
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80533
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80533
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:13.876 killing process with pid 80533
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80533'
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80533
00:19:13.876 Received shutdown signal, test time was about 2.000000 seconds
00:19:13.876
00:19:13.876 Latency(us)
00:19:13.876 [2024-11-22T14:57:28.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.876 [2024-11-22T14:57:28.541Z] ===================================================================================================================
00:19:13.876 [2024-11-22T14:57:28.541Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:13.876 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80533
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80586
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:19:14.134 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80586 /var/tmp/bperf.sock
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80586 ']'
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:14.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:14.135 14:57:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:14.135 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:14.135 Zero copy mechanism will not be used.
00:19:14.135 [2024-11-22 14:57:28.746438] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization...
00:19:14.135 [2024-11-22 14:57:28.746533] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80586 ]
00:19:14.393 [2024-11-22 14:57:28.887077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:14.393 [2024-11-22 14:57:28.936544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:14.393 [2024-11-22 14:57:29.008030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:15.354 14:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:15.637 nvme0n1
00:19:15.637 14:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:15.637 14:57:30
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.637 14:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:15.637 14:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.637 14:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:15.637 14:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:15.912 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:15.912 Zero copy mechanism will not be used. 00:19:15.912 Running I/O for 2 seconds... 00:19:15.912 [2024-11-22 14:57:30.332896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.332949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.332964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.336894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.336927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.336938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.340501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.340531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.340542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.344156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.344187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.344198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.347749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.347779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.347790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.351378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 
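The trace above wires up the second error case end to end: bdevperf is restarted with a 131072-byte random-read workload at queue depth 16, NVMe error statistics and unlimited bdev retries are enabled over the bperf RPC socket, the controller is re-attached with TCP data digest enabled (--ddgst), and crc32c corruption is then injected through the accel_error module. A condensed sketch of that RPC sequence, reconstructed from the trace (socket path, target address, and injection arguments are taken verbatim from it; rpc_cmd is the autotest wrapper around rpc.py, and which RPC socket it addresses is not visible in this excerpt), might look like:

  # host side (bdevperf, RPC socket /var/tmp/bperf.sock): keep per-type NVMe error counters
  # and retry transient errors indefinitely (--bdev-retry-count -1)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # host side: attach the TCP controller with data digest enabled so read payloads are CRC32C-checked on receive
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject crc32c corruption in the accel framework so the data digest check fails on reads
  # (flags exactly as issued by digest.sh above)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

With the 131072-byte I/O size and the 4096-byte blocks implied by the first run (its 4096-byte reads completed with len:1), each read spans 131072 / 4096 = 32 blocks, which is why the completions in this run report len:32 rather than len:1.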
00:19:15.912 [2024-11-22 14:57:30.351419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.351432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.354939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.354968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.354979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.358537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.358567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.358577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.362113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.362149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.362160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.365745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.365775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.365786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.369376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.912 [2024-11-22 14:57:30.369414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.912 [2024-11-22 14:57:30.369426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.912 [2024-11-22 14:57:30.372980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.373010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.373021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.376615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.376644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.376656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.380358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.380398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.380410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.384106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.384137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.384148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.387846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.387877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.387888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.391650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.391681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.391692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.395442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.395481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.395507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.399117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.399147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.399157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.402753] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.402782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.402793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.406381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.406410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.406420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.409953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.409983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.409997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.413522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.413550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.413560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.417069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.417099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.417110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.420639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.420668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.420678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.424219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.424249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.424260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
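When the timed run finishes, the script reads the error counters back over the bperf RPC socket, as in the get_transient_errcount trace earlier; a minimal sketch of that readback (command and jq filter copied from the trace) is:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The case passes when this count is greater than zero (compare the (( 147 > 0 )) check above), i.e. when the injected digest corruption surfaces as COMMAND TRANSIENT TRANSPORT ERROR completions that the unlimited retry count lets bdevperf absorb (io_failed stayed 0 in the first run's results) rather than as failed I/O.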
00:19:15.913 [2024-11-22 14:57:30.427833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.427863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.427874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.431353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.431394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.431405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.435072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.435102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.435112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.438621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.438650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.438661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.442210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.442239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.442250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.445777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.445806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.445817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.449350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.449390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.449401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.452928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.913 [2024-11-22 14:57:30.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.913 [2024-11-22 14:57:30.452969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.913 [2024-11-22 14:57:30.456610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.456639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.456649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.460228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.460258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.460269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.463798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.463828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.463839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.467418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.467447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.467457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.470982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.471011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.471022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.474592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.474621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.474632] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.478255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.478284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.478295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.481889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.481919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.481929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.485457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.485486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.485496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.489026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.489055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.489067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.492655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.492684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.492694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.496238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.496267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.496278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.499874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.499904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 
14:57:30.499914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.503464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.503501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.503513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.507013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.507043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.507053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.510636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.510667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.510678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.514330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.514361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.514382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.518053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.518084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.518094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.521784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.521814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.521825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.525550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.525580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.525590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.529231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.529262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.529273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.914 [2024-11-22 14:57:30.532973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.914 [2024-11-22 14:57:30.533002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.914 [2024-11-22 14:57:30.533013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.536758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.536788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.536799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.540275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.540305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.540316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.543863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.543892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.543903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.547428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.547456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.547466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.551030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.551060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.551070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.554610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.554639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.554649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.558170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.558201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.558211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.561764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.561793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.561803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.565360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.565399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.565411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.915 [2024-11-22 14:57:30.568956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:15.915 [2024-11-22 14:57:30.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.915 [2024-11-22 14:57:30.568997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.176 [2024-11-22 14:57:30.572545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.176 [2024-11-22 14:57:30.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.176 [2024-11-22 14:57:30.572585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.176 [2024-11-22 14:57:30.576113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.176 [2024-11-22 14:57:30.576143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.176 [2024-11-22 14:57:30.576154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.176 [2024-11-22 14:57:30.579694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.176 [2024-11-22 14:57:30.579746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.176 [2024-11-22 14:57:30.579758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.176 [2024-11-22 14:57:30.583246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.176 [2024-11-22 14:57:30.583274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.176 [2024-11-22 14:57:30.583285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.176 [2024-11-22 14:57:30.586822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.586862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.586877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.590401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.590430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.590441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.594018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.594048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.594058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.597656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.597687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.597697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.601222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 
[2024-11-22 14:57:30.601251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.601262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.604823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.604853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.604864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.608499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.608529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.608539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.612126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.612157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.612167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.616003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.616033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.616045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.619721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.619766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.619777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.623479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.623508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.623519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.627244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.627273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.627284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.630862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.630892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.630902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.634507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.634536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.634547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.638089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.638119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.638130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.641765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.641795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.641806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.645315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.645344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.645355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.648847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.648876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.648886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.652423] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.652450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.652461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.655942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.655971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.655982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.659512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.177 [2024-11-22 14:57:30.659542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.177 [2024-11-22 14:57:30.659553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.177 [2024-11-22 14:57:30.663059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.663087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.663098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.666704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.666733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.666744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.670229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.670258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.670269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.673757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.673787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.673797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.677267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.677297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.677307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.680828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.680858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.680868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.684351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.684390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.684401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.687945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.687974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.687985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.691509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.691537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.691548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.695075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.695103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.695114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.698651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.698681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.698692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.702195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.702224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.702235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.705879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.705919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.709595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.709624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.709635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.713166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.713196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.713206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.716731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.716761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.716772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.720314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.720344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.720354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.723989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.724018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.724029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.727608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.727637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.727647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.731255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.731284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.731294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.734788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.734817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.734828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.738452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.738482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.738509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.178 [2024-11-22 14:57:30.742145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.178 [2024-11-22 14:57:30.742176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.178 [2024-11-22 14:57:30.742187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.745868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.745897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.745908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.749671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.749702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.179 [2024-11-22 14:57:30.749712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.753497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.753526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.753537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.757158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.757188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.757199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.760734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.760764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.760774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.764427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.764458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.764469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.768194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.768225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.768237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.771969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.772000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.772011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.775811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.775842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.775853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.779567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.779598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.779609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.783276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.783306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.783316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.787045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.787077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.787089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.790721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.790752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.790762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.794525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.794555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.794566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.798265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.798298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.798309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.802009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.802039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.802049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.805633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.805662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.805673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.809238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.809268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.809278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.812814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.812845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.812855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.816425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.816454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.816465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.819978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.820008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.820019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.823668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 [2024-11-22 14:57:30.823698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.823709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.179 [2024-11-22 14:57:30.827335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.179 
[2024-11-22 14:57:30.827365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.179 [2024-11-22 14:57:30.827390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.180 [2024-11-22 14:57:30.831015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.180 [2024-11-22 14:57:30.831046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.180 [2024-11-22 14:57:30.831057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.180 [2024-11-22 14:57:30.834787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.180 [2024-11-22 14:57:30.834816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.180 [2024-11-22 14:57:30.834826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.838585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.838614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.838624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.842323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.842354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.842365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.845918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.845948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.845958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.849446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.849475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.849485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.853063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.853093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.853108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.856758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.856787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.856798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.860259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.860288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.860299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.863843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.863873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.863883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.867358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.867398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.867409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.870922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.870952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.870963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.874562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.874602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.878098] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.878128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.878139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.881666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.881695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.881706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.885243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.885274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.441 [2024-11-22 14:57:30.885285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.441 [2024-11-22 14:57:30.888833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.441 [2024-11-22 14:57:30.888863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.888874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.892382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.892410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.892420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.895928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.895969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.899570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.899601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.899612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:19:16.442 [2024-11-22 14:57:30.903213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.903243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.903253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.906816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.906846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.906857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.910448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.910478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.910504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.914051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.914081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.914091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.917774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.917804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.917815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.921476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.921506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.921517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.925107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.925137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.925148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.928715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.928746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.928757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.932240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.932270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.932280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.935837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.935867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.935878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.939373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.939414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.939426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.943019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.943047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.943058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.946695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.946725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.946737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.950382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.950424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.950435] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.954058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.954088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.954099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.957819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.957849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.957860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.961575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.961604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.961614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.965156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.965186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.965197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.968736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.968771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.968783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.972406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.972435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.972445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.975971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.976000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 
14:57:30.976011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.979561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.979591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.979601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.983062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.983090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.442 [2024-11-22 14:57:30.983101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.442 [2024-11-22 14:57:30.986685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.442 [2024-11-22 14:57:30.986714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:30.986726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:30.990210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:30.990240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:30.990251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:30.993722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:30.993751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:30.993762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:30.997267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:30.997299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:30.997310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.000938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.000968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.000979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.004581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.004613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.004623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.008118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.008149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.008159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.011664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.011694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.011704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.015165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.015195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.015206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.018740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.018770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.018781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.022264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.022295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.022306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.025826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.025856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.025867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.029328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.029359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.029380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.032897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.032928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.032938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.036531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.036561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.036572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.040207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.040238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.040250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.043875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.043905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.043915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.047667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.047698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.047709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.051307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.051338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.051349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.054923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.054953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.054964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.058703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.058734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.058744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.062380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.062419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.062431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.066095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.066126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.066137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.069683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.069713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.073184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.073214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.073224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.076720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 
00:19:16.443 [2024-11-22 14:57:31.076750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.076760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.080266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.080297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.443 [2024-11-22 14:57:31.080307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.443 [2024-11-22 14:57:31.083894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.443 [2024-11-22 14:57:31.083924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.444 [2024-11-22 14:57:31.083934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.444 [2024-11-22 14:57:31.087461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.444 [2024-11-22 14:57:31.087498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.444 [2024-11-22 14:57:31.087509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.444 [2024-11-22 14:57:31.090983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.444 [2024-11-22 14:57:31.091014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.444 [2024-11-22 14:57:31.091024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.444 [2024-11-22 14:57:31.094508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.444 [2024-11-22 14:57:31.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.444 [2024-11-22 14:57:31.094549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.444 [2024-11-22 14:57:31.098052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.444 [2024-11-22 14:57:31.098082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.444 [2024-11-22 14:57:31.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.101666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.101698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.101709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.105281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.105311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.105322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.108880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.108911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.108922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.112434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.112463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.112473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.115940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.115970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.115981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.119526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.119555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.119568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.123054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.123083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.123093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.126607] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.126646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.130237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.130267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.130278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.133829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.133859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.133870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.137389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.137418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.137429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.140871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.140901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.140911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.144368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.144408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.144418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.147914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.147944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.147955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.151458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.151495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.151506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.154972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.155002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.155012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.158537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.158577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.162116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.162145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.162156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.165688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.165718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.165728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.169231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.169261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.169272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.172774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.172804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.172815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.176275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.176305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.176315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.179817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.179847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.179858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.183422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.183451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.705 [2024-11-22 14:57:31.183462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.705 [2024-11-22 14:57:31.187030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.705 [2024-11-22 14:57:31.187060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.187071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.190615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.190645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.190655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.194138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.194168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.194179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.197728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.197758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.197768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.201257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.201287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.201298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.204867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.204897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.204908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.208361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.208404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.208415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.211908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.211938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.211950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.215404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.215432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.215443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.218933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.218964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.218975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.222484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.222513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 
[2024-11-22 14:57:31.222523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.226000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.226031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.226041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.229545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.229574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.229585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.233058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.233089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.233099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.236766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.236798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.236809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.240581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.240610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.240621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.244299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.244330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.244342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.247999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.248029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.248040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.251585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.251615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.251625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.255112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.255141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.255152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.258764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.258794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.258804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.262427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.262457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.262467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.265973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.266003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.266014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.269523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.269552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.269562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.273045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.273075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.273085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.276643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.276673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.276683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.280224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.280254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.706 [2024-11-22 14:57:31.280265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.706 [2024-11-22 14:57:31.283729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.706 [2024-11-22 14:57:31.283759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.283769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.287243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.287272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.287283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.290789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.290819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.290829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.294285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.294315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.294325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.298056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.298086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.298099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.301832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.301863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.301873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.305521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.305568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.305579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.309357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.309398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.309419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.313069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.313100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.313111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.316873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.316904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.316916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.320809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.320841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.320852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.707 8478.00 IOPS, 1059.75 MiB/s [2024-11-22T14:57:31.372Z] [2024-11-22 14:57:31.326092] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.326123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.326150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.330014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.330045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.330056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.333929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.333960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.333971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.337588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.337619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.337630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.341180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.341210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.341221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.344732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.344762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.344772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.348264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.348294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.348304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:19:16.707 [2024-11-22 14:57:31.351771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.351801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.351811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.355284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.355314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.355324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.358835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.358865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.358876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.707 [2024-11-22 14:57:31.362332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.707 [2024-11-22 14:57:31.362362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.707 [2024-11-22 14:57:31.362386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.365901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.365931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.365942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.369484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.369514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.369525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.372989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.373019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.373030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.376532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.376572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.376582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.380113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.380144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.380154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.383649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.383679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.383690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.387158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.387188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.387199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.390719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.394268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.394298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.394309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.397851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.397881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.397891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.401490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.401519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.401530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.405051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.405080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.405091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.408641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.408670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.408681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.412192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.968 [2024-11-22 14:57:31.412223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.968 [2024-11-22 14:57:31.412234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.968 [2024-11-22 14:57:31.415702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.415733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.415744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.419228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.419257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.419267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.422758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.422788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.969 [2024-11-22 14:57:31.422799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.426220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.426252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.426278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.429932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.429963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.429974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.433474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.433503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.433516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.437056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.437086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.437097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.440631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.440671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.440682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.444190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.444219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.444230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.447748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.447778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.447789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.451349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.451398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.451409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.454947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.454977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.454987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.458521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.458550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.462076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.462106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.462116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.465693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.465723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.465733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.469236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.469266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.469277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.472794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.472824] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.472835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.476219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.476249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.476259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.479764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.479795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.479805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.483214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.483243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.483254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.486799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.486828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.486838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.490337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.490368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.490391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.493883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.493923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.493933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.497455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 
14:57:31.497485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.497496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.500984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.501014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.501024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.504514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.969 [2024-11-22 14:57:31.504543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.969 [2024-11-22 14:57:31.504554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.969 [2024-11-22 14:57:31.508035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.508067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.508078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.511716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.511747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.511758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.515423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.515452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.515463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.519087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.519117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.519143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.522794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.522824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.522834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.526418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.526447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.526457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.529948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.529977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.529987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.533654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.533694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.537220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.537252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.537263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.540808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.540838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.540848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.544353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.544394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.544405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.547943] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.547973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.547983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.551505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.551534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.551544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.555073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.555102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.555113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.558648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.562212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.562241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.562252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.565710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.565750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.569312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.569342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.569353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:19:16.970 [2024-11-22 14:57:31.572880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.572910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.572920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.576406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.576435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.576446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.579926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.579956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.579966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.583506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.583535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.583546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.587062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.587092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.587103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.590795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.590826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.590837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.594508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.594537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.970 [2024-11-22 14:57:31.594548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.970 [2024-11-22 14:57:31.598111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.970 [2024-11-22 14:57:31.598159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.598169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.601804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.601834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.601844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.605331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.605362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.605385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.608954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.608984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.608995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.612524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.612554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.612564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.616081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.616122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.619662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.619702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.619713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.623266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.623306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.971 [2024-11-22 14:57:31.626791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:16.971 [2024-11-22 14:57:31.626820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.971 [2024-11-22 14:57:31.626831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.630338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.630380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.630392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.633867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.633896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.633907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.637423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.637452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.637463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.641010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.641040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.641051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.644623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.644654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 
[2024-11-22 14:57:31.644665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.648221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.648244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.648254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.651917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.651948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.651959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.655645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.655675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.655686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.659317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.659355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.659367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.663071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.663101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.663112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.666822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.666851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.666862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.670505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.670535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.670546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.674253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.674283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.674294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.678040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.678071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.678081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.681781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.681812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.681822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.685569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.685599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.685610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.689297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.689330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.689341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.693013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.693043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.693053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.696610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.696639] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.696650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.700167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.700197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.700208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.703684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.703714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.703724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.707193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.707223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.707233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.710782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.710812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.710822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.714420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.233 [2024-11-22 14:57:31.714450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.233 [2024-11-22 14:57:31.714461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.233 [2024-11-22 14:57:31.718128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.718159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.718170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.721843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 
14:57:31.721873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.721883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.725620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.725651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.725662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.729329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.729361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.729383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.733031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.733061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.733072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.736500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.736529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.736540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.740025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.740055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.743580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.743610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.743621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.747147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.747177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.747188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.750717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.750748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.750758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.754213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.754253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.757720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.757749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.757760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.761347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.761386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.761398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.764900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.764929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.768458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.768487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.768498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.771979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.772009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.772020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.775463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.775501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.775512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.778960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.778990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.779000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.782541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.782570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.782581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.786112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.786142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.786153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.789629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.789659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.789669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.793195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.793225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.793236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:19:17.234 [2024-11-22 14:57:31.796696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.796726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.796737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.800262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.800293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.800303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.803791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.803821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.803832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.807351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.807390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.807401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.810847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.234 [2024-11-22 14:57:31.810877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.234 [2024-11-22 14:57:31.810888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.234 [2024-11-22 14:57:31.814472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.814501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.814512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.818030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.818061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.818072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.821764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.821794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.821805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.825511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.825541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.825551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.829239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.829280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.829291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.832970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.833000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.833010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.836629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.836658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.840275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.840305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.840316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.843880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.843910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.843921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.847550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.847581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.847592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.851142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.851184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.854701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.854731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.854742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.858275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.858305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.858315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.861941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.861972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.861983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.865603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.865633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.865643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.869162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.869193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 
[2024-11-22 14:57:31.869204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.872818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.872848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.872858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.876484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.876515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.876526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.880113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.880159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.880171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.883743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.883775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.883800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.887399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.887427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.887438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.235 [2024-11-22 14:57:31.891036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.235 [2024-11-22 14:57:31.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.235 [2024-11-22 14:57:31.891077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.495 [2024-11-22 14:57:31.894766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.894796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.894807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.898479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.898508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.898518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.902131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.902178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.902189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.905814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.905843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.905854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.909411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.909440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.909451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.913104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.913134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.913145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.916734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.916764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.920532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.920576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.920586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.924250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.924280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.927893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.927924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.927934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.931531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.931572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.931584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.935228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.935258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.935269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.938872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.938902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.938913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.942473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.942502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.942513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.946113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 
14:57:31.946142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.946152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.949748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.949778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.949788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.953502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.953538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.953549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.957260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.957290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.957301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.960976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.961006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.961017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.964607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.964636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.964647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.968140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.968171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.968181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.971717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.971758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.975267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.975296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.975307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.978915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.978955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.982508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.982537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.982547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.986058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.986088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.986098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.989669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.989699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.989710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.993251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.993281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.993291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:31.996809] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:31.996839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:31.996849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.000334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.000364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.000386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.003863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.003894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.003904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.007444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.007480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.007493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.011015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.011045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.011055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.014581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.014621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.018068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.018098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.018108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.021682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.021713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.021724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.025237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.496 [2024-11-22 14:57:32.025267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.496 [2024-11-22 14:57:32.025277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.496 [2024-11-22 14:57:32.028828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.028858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.028868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.032413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.032442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.032453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.035896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.035928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.035938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.039444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.039481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.039492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.043008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.043038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.043049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.046596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.046627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.046638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.050075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.050106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.050116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.053580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.053610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.053621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.057092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.057121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.057132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.060621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.060651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.060662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.064167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.064197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.064208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.067692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.067722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.067732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.071247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.071276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.071287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.074790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.074819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.074829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.078259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.078288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.078298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.081832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.081872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.085453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.085483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.085494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.088977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.089008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.089018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.092566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.092596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 
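(Aside, not part of the captured log: the repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above come from the NVMe/TCP data-digest check — the host side recomputes a CRC-32C over each received data PDU (see nvme_tcp_accel_seq_recv_compute_crc32_done in the messages) and, on a mismatch, the command is completed with the transient transport error status shown, which is what this test run appears to be exercising. The snippet below is only a minimal illustrative sketch of that kind of check, not SPDK's implementation; the helper names and the assumption that the digest covers the PDU payload as a single buffer are mine.)

# Illustrative sketch (not SPDK code): an NVMe/TCP-style data-digest check.
# NVMe/TCP digests use CRC-32C (Castagnoli); a mismatch is what the log
# above reports as a "data digest error".

def crc32c(data: bytes) -> int:
    # Bitwise CRC-32C: reflected polynomial 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    # Recompute the digest over the received payload and compare it with
    # the digest value carried at the end of the PDU.
    return crc32c(payload) == received_digest

if __name__ == "__main__":
    digest = crc32c(b"123456789")
    assert digest == 0xE3069283  # standard CRC-32C check value
    print(data_digest_ok(b"123456789", digest))        # True: digest matches
    print(data_digest_ok(b"123456789", digest ^ 0x1))  # False: mismatch, i.e.
    # the condition the log above surfaces as a data digest error followed by
    # a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.

(End of aside; the console log continues below.)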
[2024-11-22 14:57:32.092607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.096118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.096148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.096159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.099684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.099715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.099725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.103227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.103257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.103267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.106770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.106799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.106810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.110274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.110303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.110314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.113891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.113921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.113932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.117471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.117500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.117511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.120950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.120979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.120990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.124493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.124523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.124533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.128084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.128114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.128124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.131652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.131682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.131693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.135274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.135314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.135325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.138936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.138987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.142620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.142650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.142660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.146198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.146228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.146239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.149868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.149899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.149909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.497 [2024-11-22 14:57:32.153497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.497 [2024-11-22 14:57:32.153528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.497 [2024-11-22 14:57:32.153553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.157185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.157216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.157226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.160795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.160826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.160836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.164311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.164341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.164352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.167809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.167839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.167850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.171349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.171390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.171402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.174921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.174951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.174961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.178472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.178512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.181979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.182009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.182019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.185553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.185582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.185593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.189154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.189186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.189196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.192711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.192740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.192751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.196419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.196448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.196459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.200073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.200103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.203733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.203764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.203775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.207323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.207353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.207365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.210899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.210929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.210939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.214464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.214493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.214504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.218062] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.218092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.218103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.221672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.221712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.225194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.225224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.225234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.228677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.228707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.228717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.232240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.232270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.232280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.235839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.235869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.235879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.239328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.239367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.239389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.242936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.242967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.242977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.246578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.246608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.246618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.250100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.250131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.250141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.253680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.253710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.253720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.257193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.257223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.257233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.260646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.260687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.264200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.264231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.264242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.267772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.757 [2024-11-22 14:57:32.267802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.757 [2024-11-22 14:57:32.267813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.757 [2024-11-22 14:57:32.271323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.271352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.271362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.275062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.275093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.275104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.278696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.278726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.278736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.282339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.282394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.285961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.285991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.286002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.289631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.289661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.289672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.293285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.293315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.293326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.296872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.296902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.296912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.300514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.300544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.300555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.304151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.304197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.304208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.307802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.307832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.307843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.311285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.311315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.311326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.314839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.314869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 
[2024-11-22 14:57:32.314880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.318329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.318359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.318382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:17.758 [2024-11-22 14:57:32.321819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16be400) 00:19:17.758 [2024-11-22 14:57:32.321848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.758 [2024-11-22 14:57:32.321859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:17.758 8548.00 IOPS, 1068.50 MiB/s 00:19:17.758 Latency(us) 00:19:17.758 [2024-11-22T14:57:32.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.758 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:17.758 nvme0n1 : 2.00 8546.81 1068.35 0.00 0.00 1869.38 1675.64 12392.26 00:19:17.758 [2024-11-22T14:57:32.423Z] =================================================================================================================== 00:19:17.758 [2024-11-22T14:57:32.423Z] Total : 8546.81 1068.35 0.00 0.00 1869.38 1675.64 12392.26 00:19:17.758 { 00:19:17.758 "results": [ 00:19:17.758 { 00:19:17.758 "job": "nvme0n1", 00:19:17.758 "core_mask": "0x2", 00:19:17.758 "workload": "randread", 00:19:17.758 "status": "finished", 00:19:17.758 "queue_depth": 16, 00:19:17.758 "io_size": 131072, 00:19:17.758 "runtime": 2.00215, 00:19:17.758 "iops": 8546.812176909822, 00:19:17.758 "mibps": 1068.3515221137277, 00:19:17.758 "io_failed": 0, 00:19:17.758 "io_timeout": 0, 00:19:17.758 "avg_latency_us": 1869.3818725827703, 00:19:17.758 "min_latency_us": 1675.6363636363637, 00:19:17.758 "max_latency_us": 12392.261818181818 00:19:17.758 } 00:19:17.758 ], 00:19:17.758 "core_count": 1 00:19:17.758 } 00:19:17.758 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:17.758 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:17.758 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:17.758 | .driver_specific 00:19:17.758 | .nvme_error 00:19:17.758 | .status_code 00:19:17.758 | .command_transient_transport_error' 00:19:17.758 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 552 > 0 )) 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80586 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80586 ']' 00:19:18.016 14:57:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80586 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80586 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.016 killing process with pid 80586 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80586' 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80586 00:19:18.016 Received shutdown signal, test time was about 2.000000 seconds 00:19:18.016 00:19:18.016 Latency(us) 00:19:18.016 [2024-11-22T14:57:32.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.016 [2024-11-22T14:57:32.681Z] =================================================================================================================== 00:19:18.016 [2024-11-22T14:57:32.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.016 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80586 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80646 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80646 /var/tmp/bperf.sock 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80646 ']' 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
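[editor's note] The trace above shows how the harness decides whether the randread error run passed: get_transient_errcount queries bdevperf's iostat over the RPC socket and pulls the NVMe transient-transport-error counter out with jq, and the run only succeeds if that count is non-zero (552 such errors were recorded here against the injected crc32c corruption). A minimal, hedged sketch of that check, assuming rpc.py and jq are on PATH and a bdevperf instance is still listening on /var/tmp/bperf.sock:

  # Count completions that finished as COMMAND TRANSIENT TRANSPORT ERROR
  # (sketch of the get_transient_errcount step seen in the trace, not the script itself)
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) || exit 1   # the injected crc32c corruption must have produced at least one error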
00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.273 14:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.273 [2024-11-22 14:57:32.931661] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:19:18.273 [2024-11-22 14:57:32.931747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80646 ] 00:19:18.531 [2024-11-22 14:57:33.062147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.531 [2024-11-22 14:57:33.104782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.531 [2024-11-22 14:57:33.176274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.789 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.789 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:18.789 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:18.789 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:19.047 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:19.304 nvme0n1 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:19.304 14:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:19.304 Running I/O for 2 seconds... 
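[editor's note] Before the 2-second randwrite run starts, the trace above configures the bdevperf target the same way as the earlier randread case: NVMe error statistics are enabled, bdev retries are disabled so digest failures surface as command errors, the controller is attached with data digest (--ddgst) enabled, and the accel layer is told to corrupt every 256th crc32c operation. A condensed, hedged sketch of that same RPC sequence against the bperf socket, using the paths shown in this job's trace:

  # Start bdevperf in wait mode: 4 KiB random writes, queue depth 128, 2-second run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable          # clear any previous injection
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt every 256th crc32c op
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest errors that follow in the log are therefore expected: each corrupted crc32c computation shows up as "data digest error" in nvme_tcp.c and is counted as a transient transport error, which the same jq-based check verifies after the run completes.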
00:19:19.304 [2024-11-22 14:57:33.944200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7100 00:19:19.304 [2024-11-22 14:57:33.945505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.304 [2024-11-22 14:57:33.945552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:19.304 [2024-11-22 14:57:33.956957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7970 00:19:19.304 [2024-11-22 14:57:33.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.304 [2024-11-22 14:57:33.958231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:33.969951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f81e0 00:19:19.562 [2024-11-22 14:57:33.971161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:33.971192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:33.982630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f8a50 00:19:19.562 [2024-11-22 14:57:33.983835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:33.983866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:33.995176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f92c0 00:19:19.562 [2024-11-22 14:57:33.996363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:33.996405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.007774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f9b30 00:19:19.562 [2024-11-22 14:57:34.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.008961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.020272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fa3a0 00:19:19.562 [2024-11-22 14:57:34.021425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.021457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.032818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fac10 00:19:19.562 [2024-11-22 14:57:34.033942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.033973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.045317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fb480 00:19:19.562 [2024-11-22 14:57:34.046439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.046469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.057809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fbcf0 00:19:19.562 [2024-11-22 14:57:34.058905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.058935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.070693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fc560 00:19:19.562 [2024-11-22 14:57:34.071831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.071861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.083535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fcdd0 00:19:19.562 [2024-11-22 14:57:34.084628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.084659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.096218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fd640 00:19:19.562 [2024-11-22 14:57:34.097338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.097383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.108922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fdeb0 00:19:19.562 [2024-11-22 14:57:34.109957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.109988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.121433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fe720 00:19:19.562 [2024-11-22 14:57:34.122461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.122491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.133979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ff3c8 00:19:19.562 [2024-11-22 14:57:34.134985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.135016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.151726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ff3c8 00:19:19.562 [2024-11-22 14:57:34.153664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.153695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.164264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fe720 00:19:19.562 [2024-11-22 14:57:34.166185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.166214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.176770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fdeb0 00:19:19.562 [2024-11-22 14:57:34.178730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.178759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.189723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fd640 00:19:19.562 [2024-11-22 14:57:34.191652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.191683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.202630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fcdd0 00:19:19.562 [2024-11-22 14:57:34.204563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:19.562 [2024-11-22 14:57:34.215534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fc560 00:19:19.562 [2024-11-22 14:57:34.217421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.562 [2024-11-22 14:57:34.217451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:19.819 [2024-11-22 14:57:34.228189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fbcf0 00:19:19.819 [2024-11-22 14:57:34.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.230068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.240727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fb480 00:19:19.820 [2024-11-22 14:57:34.242613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.242643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.253510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fac10 00:19:19.820 [2024-11-22 14:57:34.255319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.255349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.266476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fa3a0 00:19:19.820 [2024-11-22 14:57:34.268388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.268418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.279687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f9b30 00:19:19.820 [2024-11-22 14:57:34.281581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.292888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f92c0 00:19:19.820 [2024-11-22 14:57:34.294766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.294796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.306065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f8a50 00:19:19.820 [2024-11-22 14:57:34.307983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.308014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.319215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f81e0 00:19:19.820 [2024-11-22 14:57:34.320972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.321002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.331747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7970 00:19:19.820 [2024-11-22 14:57:34.333480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.333512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.344208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7100 00:19:19.820 [2024-11-22 14:57:34.345927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.345957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.356769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f6890 00:19:19.820 [2024-11-22 14:57:34.358468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.358498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.369249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f6020 00:19:19.820 [2024-11-22 14:57:34.370940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.370969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.381775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f57b0 00:19:19.820 [2024-11-22 14:57:34.383444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.383489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.394270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f4f40 00:19:19.820 [2024-11-22 14:57:34.395934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.395963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.406766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f46d0 00:19:19.820 [2024-11-22 14:57:34.408414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.408444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.419292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f3e60 00:19:19.820 [2024-11-22 14:57:34.420928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.420959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.431807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f35f0 00:19:19.820 [2024-11-22 14:57:34.433426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.433456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.444307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f2d80 00:19:19.820 [2024-11-22 14:57:34.445910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.445939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.456872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f2510 00:19:19.820 [2024-11-22 14:57:34.458456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 14:57:34.458485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:19.820 [2024-11-22 14:57:34.469348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f1ca0 00:19:19.820 [2024-11-22 14:57:34.470918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.820 [2024-11-22 
14:57:34.470948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.481895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f1430 00:19:20.079 [2024-11-22 14:57:34.483452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.483497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.494437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f0bc0 00:19:20.079 [2024-11-22 14:57:34.495975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.496005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.506968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f0350 00:19:20.079 [2024-11-22 14:57:34.508502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.508532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.519518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166efae0 00:19:20.079 [2024-11-22 14:57:34.521027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.521057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.532003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ef270 00:19:20.079 [2024-11-22 14:57:34.533512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.533542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.544519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eea00 00:19:20.079 [2024-11-22 14:57:34.545989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.546019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.557012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ee190 00:19:20.079 [2024-11-22 14:57:34.558489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:20.079 [2024-11-22 14:57:34.558519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.569500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ed920 00:19:20.079 [2024-11-22 14:57:34.570940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.570969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.581996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ed0b0 00:19:20.079 [2024-11-22 14:57:34.583434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.583463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.594475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ec840 00:19:20.079 [2024-11-22 14:57:34.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.595923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.606995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ebfd0 00:19:20.079 [2024-11-22 14:57:34.608497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.608526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.619624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eb760 00:19:20.079 [2024-11-22 14:57:34.621006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.621036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.632169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eaef0 00:19:20.079 [2024-11-22 14:57:34.633551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.633580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.644733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ea680 00:19:20.079 [2024-11-22 14:57:34.646083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24819 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.646114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.657212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e9e10 00:19:20.079 [2024-11-22 14:57:34.658562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.658592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.669713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e95a0 00:19:20.079 [2024-11-22 14:57:34.671035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.671065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.682269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e8d30 00:19:20.079 [2024-11-22 14:57:34.683625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.683655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.695127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e84c0 00:19:20.079 [2024-11-22 14:57:34.696541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.696574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.709539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e7c50 00:19:20.079 [2024-11-22 14:57:34.710865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.710899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.722736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e73e0 00:19:20.079 [2024-11-22 14:57:34.724074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.724107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:20.079 [2024-11-22 14:57:34.735758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e6b70 00:19:20.079 [2024-11-22 14:57:34.737062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.079 [2024-11-22 14:57:34.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.748463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e6300 00:19:20.338 [2024-11-22 14:57:34.749695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.338 [2024-11-22 14:57:34.749726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.761039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e5a90 00:19:20.338 [2024-11-22 14:57:34.762258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.338 [2024-11-22 14:57:34.762289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.774265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e5220 00:19:20.338 [2024-11-22 14:57:34.775581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.338 [2024-11-22 14:57:34.775613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.787162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e49b0 00:19:20.338 [2024-11-22 14:57:34.788407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.338 [2024-11-22 14:57:34.788438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.799821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e4140 00:19:20.338 [2024-11-22 14:57:34.800998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.338 [2024-11-22 14:57:34.801029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:20.338 [2024-11-22 14:57:34.812424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e38d0 00:19:20.338 [2024-11-22 14:57:34.813582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.813612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.825496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e3060 00:19:20.339 [2024-11-22 14:57:34.826749] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.826779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.838779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e27f0 00:19:20.339 [2024-11-22 14:57:34.840028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.840059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.851802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e1f80 00:19:20.339 [2024-11-22 14:57:34.852926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.852956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.864311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e1710 00:19:20.339 [2024-11-22 14:57:34.865425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.865455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.876921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e0ea0 00:19:20.339 [2024-11-22 14:57:34.878059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.878089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.889542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e0630 00:19:20.339 [2024-11-22 14:57:34.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.890664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.902063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166dfdc0 00:19:20.339 [2024-11-22 14:57:34.903140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.903170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.914749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166df550 00:19:20.339 [2024-11-22 14:57:34.915821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.915852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:20.339 19736.00 IOPS, 77.09 MiB/s [2024-11-22T14:57:35.004Z] [2024-11-22 14:57:34.927626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166dece0 00:19:20.339 [2024-11-22 14:57:34.928653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.928682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.940280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166de470 00:19:20.339 [2024-11-22 14:57:34.941294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.958379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ddc00 00:19:20.339 [2024-11-22 14:57:34.960389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.960420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.971075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166de470 00:19:20.339 [2024-11-22 14:57:34.973037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.973067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.984137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166dece0 00:19:20.339 [2024-11-22 14:57:34.986152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.986182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:20.339 [2024-11-22 14:57:34.996846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166df550 00:19:20.339 [2024-11-22 14:57:34.998746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.339 [2024-11-22 14:57:34.998776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:20.598 [2024-11-22 14:57:35.009401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with 
pdu=0x2000166dfdc0 00:19:20.598 [2024-11-22 14:57:35.011276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.598 [2024-11-22 14:57:35.011307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:20.598 [2024-11-22 14:57:35.022016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e0630 00:19:20.598 [2024-11-22 14:57:35.023900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.598 [2024-11-22 14:57:35.023930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:20.598 [2024-11-22 14:57:35.034512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e0ea0 00:19:20.598 [2024-11-22 14:57:35.036382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.598 [2024-11-22 14:57:35.036410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:20.598 [2024-11-22 14:57:35.046998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e1710 00:19:20.598 [2024-11-22 14:57:35.048903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.598 [2024-11-22 14:57:35.048933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:20.598 [2024-11-22 14:57:35.059574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e1f80 00:19:20.598 [2024-11-22 14:57:35.061402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.061433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.072326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e27f0 00:19:20.599 [2024-11-22 14:57:35.074162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.074192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.084910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e3060 00:19:20.599 [2024-11-22 14:57:35.086714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.086744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.097436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x20d85b0) with pdu=0x2000166e38d0 00:19:20.599 [2024-11-22 14:57:35.099224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.099254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.109986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e4140 00:19:20.599 [2024-11-22 14:57:35.111767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.111799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.122482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e49b0 00:19:20.599 [2024-11-22 14:57:35.124234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.124264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.135000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e5220 00:19:20.599 [2024-11-22 14:57:35.136756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.136786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.147522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e5a90 00:19:20.599 [2024-11-22 14:57:35.149236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.149266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.160009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e6300 00:19:20.599 [2024-11-22 14:57:35.161728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.161756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.172528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e6b70 00:19:20.599 [2024-11-22 14:57:35.174214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.174244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.185025] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e73e0 00:19:20.599 [2024-11-22 14:57:35.186708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.186738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.197612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e7c50 00:19:20.599 [2024-11-22 14:57:35.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.199297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.210104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e84c0 00:19:20.599 [2024-11-22 14:57:35.211768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.211798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.222603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e8d30 00:19:20.599 [2024-11-22 14:57:35.224241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.224272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.235135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e95a0 00:19:20.599 [2024-11-22 14:57:35.236780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.236810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.247664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166e9e10 00:19:20.599 [2024-11-22 14:57:35.249257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.599 [2024-11-22 14:57:35.249287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:20.599 [2024-11-22 14:57:35.260134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ea680 00:19:20.858 [2024-11-22 14:57:35.261793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.858 [2024-11-22 14:57:35.261822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:20.858 [2024-11-22 14:57:35.272732] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eaef0 00:19:20.858 [2024-11-22 14:57:35.274300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.858 [2024-11-22 14:57:35.274330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:20.858 [2024-11-22 14:57:35.285221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eb760 00:19:20.858 [2024-11-22 14:57:35.286786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.858 [2024-11-22 14:57:35.286815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:20.858 [2024-11-22 14:57:35.297780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ebfd0 00:19:20.858 [2024-11-22 14:57:35.299370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.299411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.310387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ec840 00:19:20.859 [2024-11-22 14:57:35.311914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.311944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.322885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ed0b0 00:19:20.859 [2024-11-22 14:57:35.324418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.324448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.335512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ed920 00:19:20.859 [2024-11-22 14:57:35.337005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.337035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.348013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ee190 00:19:20.859 [2024-11-22 14:57:35.349527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.349557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 
14:57:35.360570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166eea00 00:19:20.859 [2024-11-22 14:57:35.362031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.362060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.373068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ef270 00:19:20.859 [2024-11-22 14:57:35.374527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.374556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.385568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166efae0 00:19:20.859 [2024-11-22 14:57:35.387003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.387032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.398134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f0350 00:19:20.859 [2024-11-22 14:57:35.399602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.399633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.410677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f0bc0 00:19:20.859 [2024-11-22 14:57:35.412093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.412124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.423172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f1430 00:19:20.859 [2024-11-22 14:57:35.424600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.424629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.435708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f1ca0 00:19:20.859 [2024-11-22 14:57:35.437084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.437114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:19:20.859 [2024-11-22 14:57:35.448241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f2510 00:19:20.859 [2024-11-22 14:57:35.449615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.449645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.460743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f2d80 00:19:20.859 [2024-11-22 14:57:35.462091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.462121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.473302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f35f0 00:19:20.859 [2024-11-22 14:57:35.474698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.474727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.485856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f3e60 00:19:20.859 [2024-11-22 14:57:35.487169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.487199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.498340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f46d0 00:19:20.859 [2024-11-22 14:57:35.499660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.499690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:20.859 [2024-11-22 14:57:35.510968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f4f40 00:19:20.859 [2024-11-22 14:57:35.512265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.859 [2024-11-22 14:57:35.512295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.523498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f57b0 00:19:21.118 [2024-11-22 14:57:35.524769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.524799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.535982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f6020 00:19:21.118 [2024-11-22 14:57:35.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.537267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.548479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f6890 00:19:21.118 [2024-11-22 14:57:35.549720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.549750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.560979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7100 00:19:21.118 [2024-11-22 14:57:35.562208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.562238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.573497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f7970 00:19:21.118 [2024-11-22 14:57:35.574710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.574740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.586007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f81e0 00:19:21.118 [2024-11-22 14:57:35.587206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.587236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.598493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f8a50 00:19:21.118 [2024-11-22 14:57:35.599687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.599717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.610971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f92c0 00:19:21.118 [2024-11-22 14:57:35.612151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.118 [2024-11-22 14:57:35.612180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.118 [2024-11-22 14:57:35.623489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f9b30 00:19:21.119 [2024-11-22 14:57:35.624642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.624671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.635974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fa3a0 00:19:21.119 [2024-11-22 14:57:35.637115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.637145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.648492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fac10 00:19:21.119 [2024-11-22 14:57:35.649668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.649697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.661036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fb480 00:19:21.119 [2024-11-22 14:57:35.662144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.673531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fbcf0 00:19:21.119 [2024-11-22 14:57:35.674623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.674652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.686082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fc560 00:19:21.119 [2024-11-22 14:57:35.687162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.687191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.698589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fcdd0 00:19:21.119 [2024-11-22 14:57:35.699666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.699696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.711090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fd640 00:19:21.119 [2024-11-22 14:57:35.712153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.712184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.723668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fdeb0 00:19:21.119 [2024-11-22 14:57:35.724705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.724734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.736188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fe720 00:19:21.119 [2024-11-22 14:57:35.737213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.737243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.749408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ff3c8 00:19:21.119 [2024-11-22 14:57:35.750470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.750500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.767367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166ff3c8 00:19:21.119 [2024-11-22 14:57:35.769317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.119 [2024-11-22 14:57:35.769347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.119 [2024-11-22 14:57:35.779897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fe720 00:19:21.378 [2024-11-22 14:57:35.781825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.781855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.792491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fdeb0 00:19:21.378 [2024-11-22 14:57:35.794413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.794443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.805028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fd640 00:19:21.378 [2024-11-22 14:57:35.806921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.806950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.817543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fcdd0 00:19:21.378 [2024-11-22 14:57:35.819424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.819454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.830077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fc560 00:19:21.378 [2024-11-22 14:57:35.831956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.831986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.842613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fbcf0 00:19:21.378 [2024-11-22 14:57:35.844558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.844590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.855555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fb480 00:19:21.378 [2024-11-22 14:57:35.857621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.857653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.868359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fac10 00:19:21.378 [2024-11-22 14:57:35.870181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.870210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.880884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166fa3a0 00:19:21.378 [2024-11-22 14:57:35.882690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 
14:57:35.882718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.893367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f9b30 00:19:21.378 [2024-11-22 14:57:35.895170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.895201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.905941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f92c0 00:19:21.378 [2024-11-22 14:57:35.907744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.907774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:21.378 [2024-11-22 14:57:35.918589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d85b0) with pdu=0x2000166f8a50 00:19:21.378 [2024-11-22 14:57:35.920420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.378 [2024-11-22 14:57:35.920449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:21.378 19925.00 IOPS, 77.83 MiB/s 00:19:21.378 Latency(us) 00:19:21.378 [2024-11-22T14:57:36.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.378 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:21.378 nvme0n1 : 2.00 19961.88 77.98 0.00 0.00 6407.38 5868.45 24188.74 00:19:21.378 [2024-11-22T14:57:36.043Z] =================================================================================================================== 00:19:21.378 [2024-11-22T14:57:36.043Z] Total : 19961.88 77.98 0.00 0.00 6407.38 5868.45 24188.74 00:19:21.378 { 00:19:21.378 "results": [ 00:19:21.378 { 00:19:21.378 "job": "nvme0n1", 00:19:21.378 "core_mask": "0x2", 00:19:21.378 "workload": "randwrite", 00:19:21.378 "status": "finished", 00:19:21.378 "queue_depth": 128, 00:19:21.378 "io_size": 4096, 00:19:21.378 "runtime": 2.002717, 00:19:21.378 "iops": 19961.881783596982, 00:19:21.378 "mibps": 77.97610071717571, 00:19:21.378 "io_failed": 0, 00:19:21.378 "io_timeout": 0, 00:19:21.379 "avg_latency_us": 6407.379884390961, 00:19:21.379 "min_latency_us": 5868.450909090909, 00:19:21.379 "max_latency_us": 24188.741818181818 00:19:21.379 } 00:19:21.379 ], 00:19:21.379 "core_count": 1 00:19:21.379 } 00:19:21.379 14:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:21.379 14:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:21.379 14:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:21.379 | .driver_specific 00:19:21.379 | .nvme_error 00:19:21.379 | .status_code 00:19:21.379 | .command_transient_transport_error' 00:19:21.379 14:57:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80646 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80646 ']' 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80646 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80646 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.637 killing process with pid 80646 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80646' 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80646 00:19:21.637 Received shutdown signal, test time was about 2.000000 seconds 00:19:21.637 00:19:21.637 Latency(us) 00:19:21.637 [2024-11-22T14:57:36.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.637 [2024-11-22T14:57:36.302Z] =================================================================================================================== 00:19:21.637 [2024-11-22T14:57:36.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.637 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80646 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80693 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80693 /var/tmp/bperf.sock 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80693 ']' 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 
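The trace above is the digest-error check in host/digest.sh: once the 2-second randwrite run finishes, the get_transient_errcount helper queries the bdevperf app over its RPC socket for nvme0n1 iostat and extracts the "command transient transport error" counter from the JSON, and the test only passes if that counter is non-zero (here it is 156, one per intentionally corrupted data digest). A minimal sketch of that pipeline, assembled from the two commands visible in the trace — the rpc.py path, the /var/tmp/bperf.sock socket, the bdev name and the jq filter are taken verbatim from the log; the surrounding shell is illustrative, not the actual digest.sh source:

    # Sketch (assumed wrapper, not the real digest.sh code):
    get_transient_errcount() {
        local bdev=$1
        # Ask the bdevperf instance listening on /var/tmp/bperf.sock for per-bdev I/O stats
        # (error counters are present because the test started bdev_nvme with --nvme-error-stat)...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            # ...and pull out the NVMe transient transport error status counter.
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    (( $(get_transient_errcount nvme0n1) > 0 ))   # corresponds to the "(( 156 > 0 ))" check in the trace

The trace then kills the finished bdevperf process (pid 80646) and, as logged just above, relaunches bdevperf for the next error case: randwrite with 128 KiB I/Os at queue depth 16 (-w randwrite -o 131072 -t 2 -q 16 -z), after which the same digest-corruption cycle repeats below.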
00:19:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.895 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:21.895 Zero copy mechanism will not be used. 00:19:21.895 [2024-11-22 14:57:36.538532] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:19:21.895 [2024-11-22 14:57:36.538618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80693 ] 00:19:22.153 [2024-11-22 14:57:36.677739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.153 [2024-11-22 14:57:36.725480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.153 [2024-11-22 14:57:36.796900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.411 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.411 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:22.411 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.411 14:57:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.411 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:22.411 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.411 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:22.668 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.668 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:22.668 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:22.927 nvme0n1 00:19:22.927 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:22.927 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.927 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:22.927 14:57:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.927 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:22.927 14:57:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:22.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:22.927 Zero copy mechanism will not be used. 00:19:22.927 Running I/O for 2 seconds... 00:19:22.927 [2024-11-22 14:57:37.522223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.522368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.526816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.526941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.526966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.530984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.531071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.531095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.535107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.535253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.539233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.539356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.539395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.543438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.543562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.543585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.547625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.547768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.551853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.551973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.551996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.556057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.556162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.556185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.560220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.560344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.560365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.564402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.564526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.564549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.568589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.568711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.568733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.572755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.572877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 
14:57:37.572899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:22.927 [2024-11-22 14:57:37.576850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.927 [2024-11-22 14:57:37.576970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.927 [2024-11-22 14:57:37.576992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.928 [2024-11-22 14:57:37.581032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.928 [2024-11-22 14:57:37.581171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.928 [2024-11-22 14:57:37.581192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:22.928 [2024-11-22 14:57:37.585153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:22.928 [2024-11-22 14:57:37.585293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.928 [2024-11-22 14:57:37.585315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.187 [2024-11-22 14:57:37.589296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.187 [2024-11-22 14:57:37.589450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.187 [2024-11-22 14:57:37.589473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.187 [2024-11-22 14:57:37.593466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.187 [2024-11-22 14:57:37.593615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.187 [2024-11-22 14:57:37.593637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.187 [2024-11-22 14:57:37.597540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.187 [2024-11-22 14:57:37.597679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.187 [2024-11-22 14:57:37.597700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.187 [2024-11-22 14:57:37.601709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.187 [2024-11-22 14:57:37.601828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:23.187 [2024-11-22 14:57:37.601849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.187 [2024-11-22 14:57:37.605822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.605961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.605983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.609966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.610104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.610125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.614194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.614320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.614342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.618359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.618498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.618520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.622574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.622667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.626795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.626888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.626909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.630942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.631063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.631084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.635109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.635232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.635255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.639282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.639424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.639446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.643433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.643575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.643597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.647587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.647710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.647731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.651686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.651770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.651799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.655788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.655917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.655939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.659895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.659991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.660014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.664017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.664139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.668017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.668140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.668162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.672093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.672217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.672239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.676225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.676340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.676362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.680332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.680468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.680491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.684473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.684608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.684630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.688556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.188 [2024-11-22 14:57:37.688648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.188 [2024-11-22 14:57:37.688670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.188 [2024-11-22 14:57:37.692765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.692886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.692909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.696841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.696966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.696988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.700914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.701036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.701058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.704984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.705108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.705130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.709092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.709212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.709234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.713171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.713295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.713318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.717290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.717428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.717450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.721392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.721516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.721538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.725560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.725653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.725675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.729682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.729789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.729811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.733790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.733881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.733903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.737872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.737994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.738015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.741980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.742089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.746013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 
14:57:37.746144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.746166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.750113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.750219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.750241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.754219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.754354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.754389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.758345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.758452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.758475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.762450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.762572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.762594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.766639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.766717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.766738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.770737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.770858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.770881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.774843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 
00:19:23.189 [2024-11-22 14:57:37.774963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.774986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.778976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.189 [2024-11-22 14:57:37.779097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.189 [2024-11-22 14:57:37.779118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.189 [2024-11-22 14:57:37.783071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.783199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.783221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.787260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.787396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.787419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.791617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.791713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.791736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.795998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.796119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.796142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.800296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.800452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.800476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.804608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.804760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.808924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.809045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.809067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.813256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.813345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.813366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.817543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.817651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.817673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.821682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.821796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.821818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.825780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.825901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.825923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.829863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.829985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.830007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.833979] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.834102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.834123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.838030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.838149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.838171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.842117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.842238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.842259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.190 [2024-11-22 14:57:37.846237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.190 [2024-11-22 14:57:37.846362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.190 [2024-11-22 14:57:37.846397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.850315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.850450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.850472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.854406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.854528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.854550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.858499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.858625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.858647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.862595] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.862687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.862708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.866702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.866824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.866845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.870819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.870941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.874892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.875025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.875047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.878952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.879077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.879099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.883071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.883194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.883217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.887164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.887285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.887307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.450 
[2024-11-22 14:57:37.891273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.891407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.891429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.895442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.895575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.895597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.899538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.450 [2024-11-22 14:57:37.899659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.450 [2024-11-22 14:57:37.899681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.450 [2024-11-22 14:57:37.903655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.903761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.903783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.907736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.907856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.907878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.911821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.911940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.911962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.915932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.916053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.916074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.920036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.920142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.920163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.924136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.924261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.924282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.928347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.928502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.928525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.932477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.932617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.932639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.936667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.936757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.936778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.940858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.940952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.940973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.944996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.945117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.945140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.949095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.949218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.949239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.953190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.953266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.953288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.957296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.957435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.957457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.961365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.961505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.961527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.965527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.965650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.965671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.969603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.969696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.969718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.973777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.973900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.973923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.977905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.978027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.978049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.982009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.982130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.982151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.986153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.986276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.986298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.990221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.990367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.994331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.994474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.994495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:37.998439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:37.998560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:37.998582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:38.002509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:38.002633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:38.002654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:38.006704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:38.006783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:38.006804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:38.010763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:38.010885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.451 [2024-11-22 14:57:38.010907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.451 [2024-11-22 14:57:38.014895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.451 [2024-11-22 14:57:38.015016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.015038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.019016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.019140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.019162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.023139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.023262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.023284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.027278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.027414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.027435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.031323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.031459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.031491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.035429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.035591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.035613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.039596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.039673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.039695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.043733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.043809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.043830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.047845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.047964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.047986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.051933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.052055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.052077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.056059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.056182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.056204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.060173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.060298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 
14:57:38.060320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.064305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.064445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.064467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.068455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.068578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.068601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.072590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.072713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.072734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.076779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.076900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.076922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.080907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.081029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.081051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.085046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.085171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.085194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.089110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.089235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:23.452 [2024-11-22 14:57:38.089257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.093161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.093285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.093306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.097250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.097365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.097400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.101336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.101469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.101492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.105498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.105625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.105646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.452 [2024-11-22 14:57:38.109642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.452 [2024-11-22 14:57:38.109764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.452 [2024-11-22 14:57:38.109785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.712 [2024-11-22 14:57:38.113776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.712 [2024-11-22 14:57:38.113870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.712 [2024-11-22 14:57:38.113891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.712 [2024-11-22 14:57:38.117918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.712 [2024-11-22 14:57:38.118038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.712 [2024-11-22 14:57:38.118060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.712 [2024-11-22 14:57:38.122035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.712 [2024-11-22 14:57:38.122157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.122180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.126161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.126283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.126304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.130221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.130343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.130365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.134351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.134472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.138459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.138581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.138603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.142540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.142661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.142682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.146677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.146792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.146813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.150781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.150901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.150923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.154908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.155030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.155052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.159019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.159142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.159165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.163111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.163232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.163254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.167225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.167349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.167384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.171366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.171514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.171536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.175544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.175660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.175681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.179721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.179826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.179850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.183851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.183973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.183994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.187956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.188080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.188102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.192008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.192132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.192154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.196156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.196289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.196311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.200283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.200422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.200444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.204424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.204513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.204535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.208574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.208697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.208718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.212836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.212960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.212981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.217049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.217141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.217162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.221291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.221441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.221467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.225658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.225779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.225801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.229902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.713 [2024-11-22 14:57:38.230021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.713 [2024-11-22 14:57:38.230043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.713 [2024-11-22 14:57:38.234218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 
14:57:38.234342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.234364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.238424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.238558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.242667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.242791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.242814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.246906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.246986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.247009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.251174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.251301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.251323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.255450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.255593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.255616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.259752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.259832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.259854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.264003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with 
pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.264130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.264155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.268287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.268428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.268451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.272608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.272738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.272760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.276889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.277017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.277039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.281162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.281302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.281325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.285492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.285620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.285644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.289785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.289873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.289896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.294066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.294145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.294168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.298351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.298484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.298507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.302646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.302788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.302810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.307074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.307201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.307225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.311583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.311680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.311704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.315951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.316077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.316100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.320280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.320422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.320461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.324676] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.324754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.324776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.328985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.329106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.329128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.333199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.333321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.333350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.337330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.337473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.337495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.341402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.341525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.341554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.345517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.714 [2024-11-22 14:57:38.345639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.714 [2024-11-22 14:57:38.345664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.714 [2024-11-22 14:57:38.349610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.349723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.349745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.715 [2024-11-22 14:57:38.353703] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.353823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.715 [2024-11-22 14:57:38.357815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.357938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.357960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.715 [2024-11-22 14:57:38.361927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.362051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.362073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.715 [2024-11-22 14:57:38.366038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.366161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.366183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.715 [2024-11-22 14:57:38.370155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.715 [2024-11-22 14:57:38.370280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.715 [2024-11-22 14:57:38.370308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.374259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.374394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.374417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.378353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.378489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.378525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.975 
[2024-11-22 14:57:38.382423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.382557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.382585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.386577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.386719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.390720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.390841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.390863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.394817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.394939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.394961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.398917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.399039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.399061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.403020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.403162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.407273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.407378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.407420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.411384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.411500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.411522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.415518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.415614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.415636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.419680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.419771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.419792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.423784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.423903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.423925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.427851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.427987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.428009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.431967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.432069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.432091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.436259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.436383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.975 [2024-11-22 14:57:38.436404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.975 [2024-11-22 14:57:38.440482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.975 [2024-11-22 14:57:38.440620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.440642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.444645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.444767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.444789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.448830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.448951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.448973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.453040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.453163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.453185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.457232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.457360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.457396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.461421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.461557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.461578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.465575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.465697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.465719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.469851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.469974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.469995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.474022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.474147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.474169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.478165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.478284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.478305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.482281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.482421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.482443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.486505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.486627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.490719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.490840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.490862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.494927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.495049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.495070] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.499101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.499224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.499246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.503350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.503514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.503538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.507783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.507924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.507946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.512124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.512245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.512266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.516256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.516376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.516412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 7424.00 IOPS, 928.00 MiB/s [2024-11-22T14:57:38.641Z] [2024-11-22 14:57:38.521668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.521790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.521812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.525802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.525925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:23.976 [2024-11-22 14:57:38.525947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.529950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.530034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.530056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.534080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.534192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.534213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.538286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.538422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.538445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.542415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.542538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.542559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.546578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.546655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.546676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.550691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.550812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.976 [2024-11-22 14:57:38.550834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.976 [2024-11-22 14:57:38.554810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.976 [2024-11-22 14:57:38.554932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.554953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.558900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.559020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.559042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.563022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.563144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.563165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.567103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.567227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.567248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.571221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.571340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.571361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.575256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.575389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.575411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.579349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.579496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.579519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.583443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.583582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.583604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.587583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.587660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.587681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.591707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.591813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.591835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.595767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.595899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.595922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.599889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.600012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.600033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.603990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.604111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.604133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.608091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.608211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.608233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.612155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.612275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.612296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.616224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.616336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.616358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.620344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.620462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.620484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.624439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.624561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.624582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.628551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.628673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.628694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.977 [2024-11-22 14:57:38.632649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:23.977 [2024-11-22 14:57:38.632762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.977 [2024-11-22 14:57:38.632784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.237 [2024-11-22 14:57:38.636753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.237 [2024-11-22 14:57:38.636844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.237 [2024-11-22 14:57:38.636866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.237 [2024-11-22 14:57:38.640853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.237 [2024-11-22 
14:57:38.640972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.237 [2024-11-22 14:57:38.640994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.237 [2024-11-22 14:57:38.644941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.237 [2024-11-22 14:57:38.645066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.237 [2024-11-22 14:57:38.645088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.237 [2024-11-22 14:57:38.649013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.237 [2024-11-22 14:57:38.649124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.237 [2024-11-22 14:57:38.649146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.237 [2024-11-22 14:57:38.653064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.237 [2024-11-22 14:57:38.653184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.653207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.657113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.657237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.657258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.661263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.661398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.661420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.665282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.665420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.665442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.669351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 
00:19:24.238 [2024-11-22 14:57:38.669492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.669514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.673432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.673566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.673588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.677516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.677638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.677660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.681589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.681711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.681733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.685693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.685813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.685834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.689796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.689917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.689939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.693894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.694017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.694039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.697958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.698080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.698102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.702076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.702188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.702211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.706190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.706350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.706383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.710279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.710421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.710444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.714452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.714576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.714597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.718612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.718733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.722736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.722812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.722834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.726886] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.727007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.727029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.731056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.731177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.731199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.735120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.735243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.735265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.739297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.739430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.739452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.743381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.743528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.743550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.747559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.747648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.747670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.751743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.751819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.751840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.755853] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.755974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.755996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.759979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.760092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.760115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.238 [2024-11-22 14:57:38.764088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.238 [2024-11-22 14:57:38.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.238 [2024-11-22 14:57:38.764234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.768214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.768334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.768356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.772306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.772421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.772443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.776515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.776634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.776656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.780675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.780796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 
[2024-11-22 14:57:38.784803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.784895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.784917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.788996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.789108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.789130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.793066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.793179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.793200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.797237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.797358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.797397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.801433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.801567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.801588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.805581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.805684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.805707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.809925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.810051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.810073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.814214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.814334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.814356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.818587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.818684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.818707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.822988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.823081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.823103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.827333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.827499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.827522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.831598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.831699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.831721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.835772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.835862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.835884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.839897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.840035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.840057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.844119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.844244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.844266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.848252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.848388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.848411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.852467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.852600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.852622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.856615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.856749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.856771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.860942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.861064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.861086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.865188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.865318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.865340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.869416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.869552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.869574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.873508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.873630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.873651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.239 [2024-11-22 14:57:38.877631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.239 [2024-11-22 14:57:38.877744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.239 [2024-11-22 14:57:38.877766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.240 [2024-11-22 14:57:38.881785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.240 [2024-11-22 14:57:38.881889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.240 [2024-11-22 14:57:38.881911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.240 [2024-11-22 14:57:38.885861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.240 [2024-11-22 14:57:38.885984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.240 [2024-11-22 14:57:38.886006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.240 [2024-11-22 14:57:38.889920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.240 [2024-11-22 14:57:38.890044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.240 [2024-11-22 14:57:38.890065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.240 [2024-11-22 14:57:38.894033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.240 [2024-11-22 14:57:38.894126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.240 [2024-11-22 14:57:38.894147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.500 [2024-11-22 14:57:38.898141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.500 [2024-11-22 14:57:38.898264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.500 [2024-11-22 14:57:38.898286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.500 [2024-11-22 14:57:38.902282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.500 [2024-11-22 14:57:38.902418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.500 [2024-11-22 14:57:38.902440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.500 [2024-11-22 14:57:38.906340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.500 [2024-11-22 14:57:38.906475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.500 [2024-11-22 14:57:38.906497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.500 [2024-11-22 14:57:38.910617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.500 [2024-11-22 14:57:38.910740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.500 [2024-11-22 14:57:38.910762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.914769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.914889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.914911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.918962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.919068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.919089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.923168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.923289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.923311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.927312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.927432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 
14:57:38.927454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.931504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.931588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.931610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.935668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.935772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.935794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.939815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.939928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.939951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.944115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.944239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.944261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.948251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.948385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.948408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.952414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.952538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.952560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.956571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.956693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:24.501 [2024-11-22 14:57:38.956715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.960719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.960840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.960862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.964912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.965035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.965057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.969048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.969172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.969194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.973138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.973261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.973283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.977269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.977407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.977429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.981366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.981499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.981521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.985451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.985572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.989581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.989682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.989704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.993674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.993795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.993817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:38.997767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:38.997868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:38.997889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.001876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.002007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.002028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.006003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.006095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.006116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.010186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.010324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.010345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.014340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.014501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.014524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.018430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.018574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.018596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.501 [2024-11-22 14:57:39.022537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.501 [2024-11-22 14:57:39.022673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.501 [2024-11-22 14:57:39.022695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.026642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.030750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.030868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.030889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.034848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.034990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.035011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.038949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.039090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.039112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.043075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.043217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.043239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.047160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.047303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.047325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.051230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.051381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.051404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.055290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.055454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.059335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.059497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.059519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.063414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.063561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.063582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.067493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.067638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.067659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.071587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.071724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.071746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.075736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.075872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.075893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.079836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.079958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.079979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.083974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.084113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.084135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.088024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.088162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.092169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.092312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.092333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.096253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.096410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.096432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.100364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 
14:57:39.100523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.100544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.104485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.104639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.104661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.108641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.108772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.108793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.112855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.112979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.113001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.116967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.117113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.121087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.121211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.125176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.125300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.125322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.129310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with 
pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.129409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.129431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.502 [2024-11-22 14:57:39.133428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.502 [2024-11-22 14:57:39.133567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.502 [2024-11-22 14:57:39.133589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.137531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.137653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.137675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.141665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.141787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.141808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.145771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.145895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.145917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.149878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.149980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.150001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.153998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.154121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.154144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.503 [2024-11-22 14:57:39.158115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.503 [2024-11-22 14:57:39.158238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.503 [2024-11-22 14:57:39.158260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.162269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.162398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.162420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.166326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.166464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.166486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.170437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.170563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.170584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.174534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.174656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.174678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.178627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.178749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.178771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.182754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.182854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.182875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.186921] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.187043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.187064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.191052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.191176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.191197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.195175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.195297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.195319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.199259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.199415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.203400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.203535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.203557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.207498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.207624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.207646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.211611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.211735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.211756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.215773] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.215865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.215887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.219969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.220054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.220075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.224100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.224226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.224248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.228205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.228326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.228348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.232308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.232458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.232480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.236390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.236516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.236537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.240466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.240584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.240605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 
[2024-11-22 14:57:39.244558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.244680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.244702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.248689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.248809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.248831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.252810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.252934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.252956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.256940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.257071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.257093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.261050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.261174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.261196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.265137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.265260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.265282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.269251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.269387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.269409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.273399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.273543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.273564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.277514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.277637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.277658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.281639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.281762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.281783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.285747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.285872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.285895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.289853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.289977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.289999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.294015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.294137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.294160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.298135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.298260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.298282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.302292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.302428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.302450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.306420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.306565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.310542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.310664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.310686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.314758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.314879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.314901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.318886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.319007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.319029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.323104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.323222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.323243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.327289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.327430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.327454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.331540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.331663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.331685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.769 [2024-11-22 14:57:39.335693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.769 [2024-11-22 14:57:39.335815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.769 [2024-11-22 14:57:39.335838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.339785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.339907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.339929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.343916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.344037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.344059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.348033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.348154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.348176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.352121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.352234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.352256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.356212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.356344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.356367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.360285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.360429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.360453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.364434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.364556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.364577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.368554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.368661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.368683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.372644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.372766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.372788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.376767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.376888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.376910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.380854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.380976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.380998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.384958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.385079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.385100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.389054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.389177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.389199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.393148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.393273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.397288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.397419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.397441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.401450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.401573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.401595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.405545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.405646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.405668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.409607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.409729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.409750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.413722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.413846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 
14:57:39.413868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.417756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.417877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.417899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.421899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.422022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.422044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:24.770 [2024-11-22 14:57:39.425999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:24.770 [2024-11-22 14:57:39.426120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.770 [2024-11-22 14:57:39.426141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.028 [2024-11-22 14:57:39.430176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.028 [2024-11-22 14:57:39.430298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.028 [2024-11-22 14:57:39.430321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.028 [2024-11-22 14:57:39.434426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.028 [2024-11-22 14:57:39.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.028 [2024-11-22 14:57:39.434583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.028 [2024-11-22 14:57:39.438512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.028 [2024-11-22 14:57:39.438632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.438654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.442665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.442759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.029 [2024-11-22 14:57:39.442780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.446782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.446903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.446925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.450882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.450999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.451020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.454993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.455113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.455135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.459084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.459209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.459230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.463290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.463411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.463433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.467581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.467672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.467694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.471727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.471822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.471844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.475898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.476014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.476036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.479980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.480103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.480125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.484079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.484203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.484242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.488185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.488308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.488329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.492282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.492418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.492441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.496395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.496518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.496540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.500550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.500628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.500649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.504749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.504874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.504896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.509024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.509153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.509175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.513267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.513426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.513449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.029 [2024-11-22 14:57:39.517615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20d88f0) with pdu=0x2000166ff3c8 00:19:25.029 [2024-11-22 14:57:39.517718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.029 [2024-11-22 14:57:39.517739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.029 7457.50 IOPS, 932.19 MiB/s 00:19:25.029 Latency(us) 00:19:25.029 [2024-11-22T14:57:39.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:25.029 nvme0n1 : 2.00 7455.79 931.97 0.00 0.00 2140.44 1452.22 6762.12 00:19:25.029 [2024-11-22T14:57:39.694Z] =================================================================================================================== 00:19:25.029 [2024-11-22T14:57:39.694Z] Total : 7455.79 931.97 0.00 0.00 2140.44 1452.22 6762.12 00:19:25.029 { 00:19:25.029 "results": [ 00:19:25.029 { 00:19:25.029 "job": "nvme0n1", 00:19:25.029 "core_mask": "0x2", 00:19:25.029 "workload": "randwrite", 00:19:25.029 "status": "finished", 00:19:25.029 "queue_depth": 16, 00:19:25.029 "io_size": 131072, 00:19:25.029 "runtime": 2.003677, 00:19:25.029 "iops": 7455.792525441975, 00:19:25.029 "mibps": 931.9740656802469, 00:19:25.029 "io_failed": 0, 00:19:25.029 "io_timeout": 0, 00:19:25.029 "avg_latency_us": 2140.4430925764777, 00:19:25.029 "min_latency_us": 1452.2181818181818, 00:19:25.029 "max_latency_us": 6762.123636363636 00:19:25.029 } 00:19:25.029 ], 00:19:25.029 "core_count": 1 00:19:25.029 } 
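The results block above is internally consistent: 7455.79 IOPS at an io_size of 131072 bytes (128 KiB) works out to 7455.79 x 0.125 MiB = 931.97 MiB/s, matching the reported "mibps". The test then checks the error injection by reading the per-bdev NVMe error counters back over the bperf RPC socket, as traced below. A minimal standalone sketch of that readback follows, assuming the same bdevperf socket (/var/tmp/bperf.sock), bdev name (nvme0n1) and rpc.py path used in this run; it mirrors get_transient_errcount in host/digest.sh rather than reproducing it verbatim.

#!/usr/bin/env bash
# Sketch: query bdevperf's iostat for nvme0n1 over its RPC socket and pull out
# the transient-transport-error counter, as host/digest.sh does after the run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

errs=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

# The digest-error test only asserts that this counter is non-zero
# (it reads 482 in this run) after forcing bad data digests on the wire.
(( errs > 0 )) && echo "observed ${errs} transient transport errors"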
00:19:25.029 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:25.029 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:25.029 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:25.029 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:25.029 | .driver_specific 00:19:25.029 | .nvme_error 00:19:25.029 | .status_code 00:19:25.029 | .command_transient_transport_error' 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 482 > 0 )) 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80693 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80693 ']' 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80693 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80693 00:19:25.288 killing process with pid 80693 00:19:25.288 Received shutdown signal, test time was about 2.000000 seconds 00:19:25.288 00:19:25.288 Latency(us) 00:19:25.288 [2024-11-22T14:57:39.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.288 [2024-11-22T14:57:39.953Z] =================================================================================================================== 00:19:25.288 [2024-11-22T14:57:39.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80693' 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80693 00:19:25.288 14:57:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80693 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80514 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80514 ']' 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80514 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.546 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80514 00:19:25.546 killing process with pid 80514 00:19:25.547 14:57:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.547 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.547 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80514' 00:19:25.547 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80514 00:19:25.547 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80514 00:19:25.805 ************************************ 00:19:25.805 END TEST nvmf_digest_error 00:19:25.805 ************************************ 00:19:25.805 00:19:25.805 real 0m15.772s 00:19:25.805 user 0m30.187s 00:19:25.805 sys 0m4.620s 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.805 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:26.063 rmmod nvme_tcp 00:19:26.063 rmmod nvme_fabrics 00:19:26.063 rmmod nvme_keyring 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:26.063 Process with pid 80514 is not found 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80514 ']' 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80514 00:19:26.063 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80514 ']' 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80514 00:19:26.064 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80514) - No such process 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80514 is not found' 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.064 14:57:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.064 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:26.323 00:19:26.323 real 0m33.903s 00:19:26.323 user 1m3.357s 00:19:26.323 sys 0m10.210s 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:26.323 ************************************ 00:19:26.323 END TEST nvmf_digest 00:19:26.323 ************************************ 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.323 ************************************ 00:19:26.323 START TEST nvmf_host_multipath 00:19:26.323 ************************************ 
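Before any multipath I/O can run, nvmftestinit rebuilds the veth/bridge topology that the digest test just tore down: two initiator-side interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, with every peer enslaved to the nvmf_br bridge. The condensed sketch below is reconstructed from the ip(8) commands traced later in this section (nvmf/common.sh lines 177-214); it is not a drop-in replacement for nvmf_veth_init, only an illustration of the network that the multipath listeners on 10.0.0.3 ports 4420 and 4421 sit on.

# Reconstructed sketch of the test network nvmf_veth_init creates.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends plug into the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace where nvmf_tgt runs.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties the four *_br peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done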
00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:26.323 * Looking for test storage... 00:19:26.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.323 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.582 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:26.582 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:26.582 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.582 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:26.582 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.583 --rc genhtml_branch_coverage=1 00:19:26.583 --rc genhtml_function_coverage=1 00:19:26.583 --rc genhtml_legend=1 00:19:26.583 --rc geninfo_all_blocks=1 00:19:26.583 --rc geninfo_unexecuted_blocks=1 00:19:26.583 00:19:26.583 ' 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.583 --rc genhtml_branch_coverage=1 00:19:26.583 --rc genhtml_function_coverage=1 00:19:26.583 --rc genhtml_legend=1 00:19:26.583 --rc geninfo_all_blocks=1 00:19:26.583 --rc geninfo_unexecuted_blocks=1 00:19:26.583 00:19:26.583 ' 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.583 --rc genhtml_branch_coverage=1 00:19:26.583 --rc genhtml_function_coverage=1 00:19:26.583 --rc genhtml_legend=1 00:19:26.583 --rc geninfo_all_blocks=1 00:19:26.583 --rc geninfo_unexecuted_blocks=1 00:19:26.583 00:19:26.583 ' 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.583 --rc genhtml_branch_coverage=1 00:19:26.583 --rc genhtml_function_coverage=1 00:19:26.583 --rc genhtml_legend=1 00:19:26.583 --rc geninfo_all_blocks=1 00:19:26.583 --rc geninfo_unexecuted_blocks=1 00:19:26.583 00:19:26.583 ' 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.583 14:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.583 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.583 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.584 Cannot find device "nvmf_init_br" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.584 Cannot find device "nvmf_init_br2" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.584 Cannot find device "nvmf_tgt_br" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.584 Cannot find device "nvmf_tgt_br2" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.584 Cannot find device "nvmf_init_br" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.584 Cannot find device "nvmf_init_br2" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.584 Cannot find device "nvmf_tgt_br" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.584 Cannot find device "nvmf_tgt_br2" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.584 Cannot find device "nvmf_br" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.584 Cannot find device "nvmf_init_if" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.584 Cannot find device "nvmf_init_if2" 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:26.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.584 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:26.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:26.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:26.843 00:19:26.843 --- 10.0.0.3 ping statistics --- 00:19:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.843 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:26.843 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:26.843 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:19:26.843 00:19:26.843 --- 10.0.0.4 ping statistics --- 00:19:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.843 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:26.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:26.843 00:19:26.843 --- 10.0.0.1 ping statistics --- 00:19:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.843 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:26.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:26.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:26.843 00:19:26.843 --- 10.0.0.2 ping statistics --- 00:19:26.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.843 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81010 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81010 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81010 ']' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.843 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:26.843 [2024-11-22 14:57:41.491448] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:19:26.843 [2024-11-22 14:57:41.491539] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.101 [2024-11-22 14:57:41.640721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:27.101 [2024-11-22 14:57:41.700511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.101 [2024-11-22 14:57:41.700875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.101 [2024-11-22 14:57:41.701121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.101 [2024-11-22 14:57:41.701289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.101 [2024-11-22 14:57:41.701340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.101 [2024-11-22 14:57:41.703001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.101 [2024-11-22 14:57:41.703021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.360 [2024-11-22 14:57:41.780832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81010 00:19:27.360 14:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:27.619 [2024-11-22 14:57:42.190366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.619 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:27.878 Malloc0 00:19:27.878 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:28.137 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.396 14:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:28.654 [2024-11-22 14:57:43.195978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.654 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:28.913 [2024-11-22 14:57:43.408082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81058 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81058 /var/tmp/bdevperf.sock 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81058 ']' 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.913 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:29.172 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.172 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:29.172 14:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:29.431 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:29.690 Nvme0n1 00:19:29.690 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:30.256 Nvme0n1 00:19:30.256 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:30.256 14:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.192 14:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:31.192 14:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:31.450 14:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:31.709 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:31.709 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:31.709 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81096 00:19:31.709 14:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:38.273 Attaching 4 probes... 00:19:38.273 @path[10.0.0.3, 4421]: 20350 00:19:38.273 @path[10.0.0.3, 4421]: 20494 00:19:38.273 @path[10.0.0.3, 4421]: 20374 00:19:38.273 @path[10.0.0.3, 4421]: 20189 00:19:38.273 @path[10.0.0.3, 4421]: 20243 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81096 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:38.273 14:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:38.542 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:38.542 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81209 00:19:38.542 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:38.542 14:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.152 Attaching 4 probes... 00:19:45.152 @path[10.0.0.3, 4420]: 21382 00:19:45.152 @path[10.0.0.3, 4420]: 21684 00:19:45.152 @path[10.0.0.3, 4420]: 21752 00:19:45.152 @path[10.0.0.3, 4420]: 21688 00:19:45.152 @path[10.0.0.3, 4420]: 21844 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81209 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:45.152 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:45.411 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:45.411 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81327 00:19:45.411 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:45.411 14:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:51.977 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:51.977 14:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.977 Attaching 4 probes... 00:19:51.977 @path[10.0.0.3, 4421]: 14955 00:19:51.977 @path[10.0.0.3, 4421]: 19612 00:19:51.977 @path[10.0.0.3, 4421]: 19700 00:19:51.977 @path[10.0.0.3, 4421]: 19920 00:19:51.977 @path[10.0.0.3, 4421]: 19835 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81327 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:51.977 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:52.234 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:52.234 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81440 00:19:52.234 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:52.234 14:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.795 Attaching 4 probes... 
00:19:58.795 00:19:58.795 00:19:58.795 00:19:58.795 00:19:58.795 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81440 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:58.795 14:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:58.795 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:59.053 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:59.053 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81552 00:19:59.053 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:59.053 14:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:05.617 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:05.617 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:05.617 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:05.617 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.617 Attaching 4 probes... 
00:20:05.618 @path[10.0.0.3, 4421]: 19652 00:20:05.618 @path[10.0.0.3, 4421]: 19304 00:20:05.618 @path[10.0.0.3, 4421]: 19529 00:20:05.618 @path[10.0.0.3, 4421]: 19786 00:20:05.618 @path[10.0.0.3, 4421]: 19661 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81552 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.618 14:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:05.618 14:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:06.555 14:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:06.555 14:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81676 00:20:06.555 14:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:06.555 14:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.123 Attaching 4 probes... 
00:20:13.123 @path[10.0.0.3, 4420]: 20304 00:20:13.123 @path[10.0.0.3, 4420]: 20790 00:20:13.123 @path[10.0.0.3, 4420]: 21012 00:20:13.123 @path[10.0.0.3, 4420]: 20504 00:20:13.123 @path[10.0.0.3, 4420]: 20160 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81676 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:13.123 [2024-11-22 14:58:27.605762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:13.123 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:13.382 14:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:19.946 14:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:19.946 14:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81850 00:20:19.946 14:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:19.946 14:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81010 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:25.294 14:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:25.294 14:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.553 Attaching 4 probes... 
00:20:25.553 @path[10.0.0.3, 4421]: 19672 00:20:25.553 @path[10.0.0.3, 4421]: 19976 00:20:25.553 @path[10.0.0.3, 4421]: 19667 00:20:25.553 @path[10.0.0.3, 4421]: 19448 00:20:25.553 @path[10.0.0.3, 4421]: 19040 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81850 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81058 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81058 ']' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81058 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81058 00:20:25.553 killing process with pid 81058 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81058' 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81058 00:20:25.553 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81058 00:20:25.553 { 00:20:25.553 "results": [ 00:20:25.553 { 00:20:25.553 "job": "Nvme0n1", 00:20:25.553 "core_mask": "0x4", 00:20:25.553 "workload": "verify", 00:20:25.553 "status": "terminated", 00:20:25.553 "verify_range": { 00:20:25.553 "start": 0, 00:20:25.553 "length": 16384 00:20:25.553 }, 00:20:25.553 "queue_depth": 128, 00:20:25.553 "io_size": 4096, 00:20:25.553 "runtime": 55.37917, 00:20:25.553 "iops": 8656.991428365574, 00:20:25.553 "mibps": 33.816372767053025, 00:20:25.553 "io_failed": 0, 00:20:25.553 "io_timeout": 0, 00:20:25.553 "avg_latency_us": 14755.629298509723, 00:20:25.553 "min_latency_us": 1310.72, 00:20:25.553 "max_latency_us": 7015926.69090909 00:20:25.553 } 00:20:25.553 ], 00:20:25.553 "core_count": 1 00:20:25.553 } 00:20:25.824 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81058 00:20:25.824 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:25.824 [2024-11-22 14:57:43.473960] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 
24.03.0 initialization... 00:20:25.824 [2024-11-22 14:57:43.474062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81058 ] 00:20:25.824 [2024-11-22 14:57:43.620879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.824 [2024-11-22 14:57:43.677645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.824 [2024-11-22 14:57:43.734223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:25.824 Running I/O for 90 seconds... 00:20:25.824 10951.00 IOPS, 42.78 MiB/s [2024-11-22T14:58:40.489Z] 10863.50 IOPS, 42.44 MiB/s [2024-11-22T14:58:40.489Z] 10723.00 IOPS, 41.89 MiB/s [2024-11-22T14:58:40.489Z] 10602.00 IOPS, 41.41 MiB/s [2024-11-22T14:58:40.489Z] 10523.20 IOPS, 41.11 MiB/s [2024-11-22T14:58:40.489Z] 10453.33 IOPS, 40.83 MiB/s [2024-11-22T14:58:40.489Z] 10405.71 IOPS, 40.65 MiB/s [2024-11-22T14:58:40.489Z] 10358.00 IOPS, 40.46 MiB/s [2024-11-22T14:58:40.489Z] [2024-11-22 14:57:52.995544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.995970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.995987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.824 [2024-11-22 14:57:52.996285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.824 [2024-11-22 14:57:52.996335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.824 [2024-11-22 14:57:52.996349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.996675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.996981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.996994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:20:25.825 [2024-11-22 14:57:52.997295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.997648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.997973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.997986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.825 [2024-11-22 14:57:52.998194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.825 [2024-11-22 14:57:52.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.825 [2024-11-22 14:57:52.998250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.826 [2024-11-22 14:57:52.998337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.998487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.998984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.998998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.826 [2024-11-22 14:57:52.999042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.826 [2024-11-22 14:57:52.999278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.826 [2024-11-22 14:57:52.999297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:52.999309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:20:25.827 [2024-11-22 14:57:52.999359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:52.999936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:52.999950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.827 [2024-11-22 14:57:53.001442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:53.001723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:53.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.827 10371.67 IOPS, 40.51 MiB/s [2024-11-22T14:58:40.492Z] 10422.50 IOPS, 40.71 MiB/s [2024-11-22T14:58:40.492Z] 10464.09 IOPS, 40.88 MiB/s [2024-11-22T14:58:40.492Z] 10498.75 IOPS, 41.01 MiB/s [2024-11-22T14:58:40.492Z] 10525.62 IOPS, 41.12 MiB/s [2024-11-22T14:58:40.492Z] 10544.07 IOPS, 41.19 MiB/s [2024-11-22T14:58:40.492Z] [2024-11-22 14:57:59.510059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.827 [2024-11-22 14:57:59.510417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.827 [2024-11-22 14:57:59.510431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.510939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.510977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.511237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:20:25.828 [2024-11-22 14:57:59.511739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.511961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.511974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.828 [2024-11-22 14:57:59.512355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.828 [2024-11-22 14:57:59.512428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.828 [2024-11-22 14:57:59.512453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.829 [2024-11-22 14:57:59.512772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.512956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.512975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.512988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.513732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:20:25.829 [2024-11-22 14:57:59.513750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.513964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.513978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.829 [2024-11-22 14:57:59.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.514935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.514994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.515012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.515037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.515051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.515076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.829 [2024-11-22 14:57:59.515095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.829 [2024-11-22 14:57:59.515120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:57:59.515174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:57:59.515213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:57:59.515251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:57:59.515301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:57:59.515343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:57:59.515359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.830 10330.73 IOPS, 40.35 MiB/s [2024-11-22T14:58:40.495Z] 9874.62 IOPS, 38.57 MiB/s [2024-11-22T14:58:40.495Z] 9873.53 IOPS, 38.57 MiB/s [2024-11-22T14:58:40.495Z] 9870.33 IOPS, 38.56 MiB/s [2024-11-22T14:58:40.495Z] 9870.84 IOPS, 38.56 MiB/s [2024-11-22T14:58:40.495Z] 9872.90 IOPS, 38.57 MiB/s [2024-11-22T14:58:40.495Z] 9874.38 IOPS, 38.57 MiB/s [2024-11-22T14:58:40.495Z] [2024-11-22 14:58:06.658714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.658838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.658877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.658908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.658939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.658969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.658982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.659012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.830 [2024-11-22 14:58:06.659043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:36 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.830 [2024-11-22 14:58:06.659780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.830 [2024-11-22 14:58:06.659800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.831 [2024-11-22 14:58:06.659828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.659847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.831 [2024-11-22 14:58:06.659860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.659895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.831 [2024-11-22 14:58:06.659908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.659926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.831 [2024-11-22 14:58:06.659939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.659961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.659975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.659994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 
sqhd:001f p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.831 [2024-11-22 14:58:06.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.831 [2024-11-22 14:58:06.660198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660649] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.660710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.660973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.660986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:84 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.832 [2024-11-22 14:58:06.661821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.661964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.661977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:20:25.832 [2024-11-22 14:58:06.661995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.662008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.662026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.662045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.662064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.662078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.662097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.662110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.662129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.832 [2024-11-22 14:58:06.662142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.832 [2024-11-22 14:58:06.662161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.662330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.662582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.662596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:06.663235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.833 [2024-11-22 14:58:06.663786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.663971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.663985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:06.664012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:06.664027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.833 9802.27 IOPS, 38.29 MiB/s [2024-11-22T14:58:40.498Z] 9376.09 IOPS, 36.63 MiB/s [2024-11-22T14:58:40.498Z] 8985.42 IOPS, 35.10 MiB/s [2024-11-22T14:58:40.498Z] 8626.00 IOPS, 33.70 MiB/s [2024-11-22T14:58:40.498Z] 8294.23 IOPS, 32.40 MiB/s [2024-11-22T14:58:40.498Z] 7987.04 IOPS, 31.20 MiB/s [2024-11-22T14:58:40.498Z] 7701.79 IOPS, 30.09 MiB/s [2024-11-22T14:58:40.498Z] 7490.52 IOPS, 29.26 MiB/s [2024-11-22T14:58:40.498Z] 7569.90 IOPS, 29.57 MiB/s [2024-11-22T14:58:40.498Z] 7639.00 IOPS, 29.84 MiB/s [2024-11-22T14:58:40.498Z] 7703.66 IOPS, 30.09 MiB/s [2024-11-22T14:58:40.498Z] 7770.09 IOPS, 30.35 MiB/s [2024-11-22T14:58:40.498Z] 7832.62 IOPS, 30.60 MiB/s [2024-11-22T14:58:40.498Z] 7884.49 IOPS, 30.80 MiB/s [2024-11-22T14:58:40.498Z] [2024-11-22 14:58:20.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.038885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.038908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.038933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.038956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.038979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.038991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.039003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.039015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.039026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.039038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.039050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.039063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.833 [2024-11-22 14:58:20.039074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.039086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.039097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.833 [2024-11-22 14:58:20.039110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.833 [2024-11-22 14:58:20.039121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.834 [2024-11-22 14:58:20.039822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.039954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.039973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 
14:58:20.040000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.834 [2024-11-22 14:58:20.040257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.834 [2024-11-22 14:58:20.040270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.835 [2024-11-22 14:58:20.040438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.835 [2024-11-22 14:58:20.040451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.040475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040514] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.040913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.040936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.040960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.040978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.040998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.041023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.041065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:25.836 [2024-11-22 14:58:20.041079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.041090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.041115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.836 [2024-11-22 14:58:20.041140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 
14:58:20.041338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.836 [2024-11-22 14:58:20.041441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.836 [2024-11-22 14:58:20.041455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.837 [2024-11-22 14:58:20.041467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.837 [2024-11-22 14:58:20.041492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.837 [2024-11-22 14:58:20.041516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.837 [2024-11-22 14:58:20.041541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.837 [2024-11-22 14:58:20.041565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.041755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b290 is same with the state(6) to be set 00:20:25.837 [2024-11-22 14:58:20.041781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.041790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.041799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127520 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.041811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.041832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.041841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128168 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.041852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.041872] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.041881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128176 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.041892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.041912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.041921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128184 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.041932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.041951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.041960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128192 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.041976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.041994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.042003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.042013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128200 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.042025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.042037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.042045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.042054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128208 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.042065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.042076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.042084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.042093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128216 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.042104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.042116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:25.837 [2024-11-22 14:58:20.042124] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:25.837 [2024-11-22 14:58:20.042133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128224 len:8 PRP1 0x0 PRP2 0x0 00:20:25.837 [2024-11-22 14:58:20.042144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.043324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:25.837 [2024-11-22 14:58:20.043406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.837 [2024-11-22 14:58:20.043445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.837 [2024-11-22 14:58:20.043513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ac1d0 (9): Bad file descriptor 00:20:25.839 [2024-11-22 14:58:20.044001] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:25.839 [2024-11-22 14:58:20.044034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ac1d0 with addr=10.0.0.3, port=4421 00:20:25.839 [2024-11-22 14:58:20.044051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ac1d0 is same with the state(6) to be set 00:20:25.839 [2024-11-22 14:58:20.044123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ac1d0 (9): Bad file descriptor 00:20:25.839 [2024-11-22 14:58:20.044158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:25.839 [2024-11-22 14:58:20.044175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:25.839 [2024-11-22 14:58:20.044188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:25.839 [2024-11-22 14:58:20.044201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:25.839 [2024-11-22 14:58:20.044214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:25.839 7952.17 IOPS, 31.06 MiB/s [2024-11-22T14:58:40.504Z] 8017.46 IOPS, 31.32 MiB/s [2024-11-22T14:58:40.504Z] 8074.89 IOPS, 31.54 MiB/s [2024-11-22T14:58:40.504Z] 8137.18 IOPS, 31.79 MiB/s [2024-11-22T14:58:40.504Z] 8197.35 IOPS, 32.02 MiB/s [2024-11-22T14:58:40.504Z] 8243.07 IOPS, 32.20 MiB/s [2024-11-22T14:58:40.504Z] 8290.62 IOPS, 32.39 MiB/s [2024-11-22T14:58:40.504Z] 8339.30 IOPS, 32.58 MiB/s [2024-11-22T14:58:40.504Z] 8389.59 IOPS, 32.77 MiB/s [2024-11-22T14:58:40.504Z] 8431.07 IOPS, 32.93 MiB/s [2024-11-22T14:58:40.504Z] [2024-11-22 14:58:30.090417] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:20:25.839 8438.24 IOPS, 32.96 MiB/s [2024-11-22T14:58:40.504Z] 8460.45 IOPS, 33.05 MiB/s [2024-11-22T14:58:40.504Z] 8480.02 IOPS, 33.13 MiB/s [2024-11-22T14:58:40.504Z] 8508.59 IOPS, 33.24 MiB/s [2024-11-22T14:58:40.504Z] 8535.54 IOPS, 33.34 MiB/s [2024-11-22T14:58:40.504Z] 8564.10 IOPS, 33.45 MiB/s [2024-11-22T14:58:40.504Z] 8591.25 IOPS, 33.56 MiB/s [2024-11-22T14:58:40.504Z] 8614.51 IOPS, 33.65 MiB/s [2024-11-22T14:58:40.504Z] 8631.57 IOPS, 33.72 MiB/s [2024-11-22T14:58:40.504Z] 8650.09 IOPS, 33.79 MiB/s [2024-11-22T14:58:40.504Z] Received shutdown signal, test time was about 55.379871 seconds 00:20:25.839 00:20:25.839 Latency(us) 00:20:25.839 [2024-11-22T14:58:40.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.839 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.839 Verification LBA range: start 0x0 length 0x4000 00:20:25.839 Nvme0n1 : 55.38 8656.99 33.82 0.00 0.00 14755.63 1310.72 7015926.69 00:20:25.839 [2024-11-22T14:58:40.504Z] =================================================================================================================== 00:20:25.839 [2024-11-22T14:58:40.504Z] Total : 8656.99 33.82 0.00 0.00 14755.63 1310.72 7015926.69 00:20:25.839 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.101 rmmod nvme_tcp 00:20:26.101 rmmod nvme_fabrics 00:20:26.101 rmmod nvme_keyring 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81010 ']' 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81010 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81010 ']' 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81010 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81010 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81010' 00:20:26.101 killing process with pid 81010 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81010 00:20:26.101 14:58:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81010 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.669 14:58:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.669 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:26.669 00:20:26.669 real 1m0.452s 00:20:26.670 user 2m44.464s 00:20:26.670 sys 0m20.437s 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.670 ************************************ 00:20:26.670 END TEST nvmf_host_multipath 00:20:26.670 ************************************ 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.670 14:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.929 ************************************ 00:20:26.929 START TEST nvmf_timeout 00:20:26.929 ************************************ 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:26.929 * Looking for test storage... 00:20:26.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.929 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:26.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.929 --rc genhtml_branch_coverage=1 00:20:26.929 --rc genhtml_function_coverage=1 00:20:26.929 --rc genhtml_legend=1 00:20:26.929 --rc geninfo_all_blocks=1 00:20:26.930 --rc geninfo_unexecuted_blocks=1 00:20:26.930 00:20:26.930 ' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:26.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.930 --rc genhtml_branch_coverage=1 00:20:26.930 --rc genhtml_function_coverage=1 00:20:26.930 --rc genhtml_legend=1 00:20:26.930 --rc geninfo_all_blocks=1 00:20:26.930 --rc geninfo_unexecuted_blocks=1 00:20:26.930 00:20:26.930 ' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:26.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.930 --rc genhtml_branch_coverage=1 00:20:26.930 --rc genhtml_function_coverage=1 00:20:26.930 --rc genhtml_legend=1 00:20:26.930 --rc geninfo_all_blocks=1 00:20:26.930 --rc geninfo_unexecuted_blocks=1 00:20:26.930 00:20:26.930 ' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:26.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.930 --rc genhtml_branch_coverage=1 00:20:26.930 --rc genhtml_function_coverage=1 00:20:26.930 --rc genhtml_legend=1 00:20:26.930 --rc geninfo_all_blocks=1 00:20:26.930 --rc geninfo_unexecuted_blocks=1 00:20:26.930 00:20:26.930 ' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.930 
14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.930 14:58:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:26.930 Cannot find device "nvmf_init_br" 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:26.930 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:27.190 Cannot find device "nvmf_init_br2" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:27.190 Cannot find device "nvmf_tgt_br" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.190 Cannot find device "nvmf_tgt_br2" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:27.190 Cannot find device "nvmf_init_br" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:27.190 Cannot find device "nvmf_init_br2" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:27.190 Cannot find device "nvmf_tgt_br" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:27.190 Cannot find device "nvmf_tgt_br2" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:27.190 Cannot find device "nvmf_br" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:27.190 Cannot find device "nvmf_init_if" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:27.190 Cannot find device "nvmf_init_if2" 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
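[editor's note] The nvmf_veth_init trace above is easier to follow with the xtrace prefixes stripped. Up to this point it has created the target network namespace and four veth pairs; this is only a condensed restatement of the commands already logged (same namespace and interface names as in the trace), not a separate setup path:

# target-side network namespace plus veth pairs for two initiator ports and two target ports
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator port 1 <-> bridge side
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator port 2 <-> bridge side
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target port 1   <-> bridge side
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target port 2   <-> bridge side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
# nvmf_tgt_if2 is moved into the namespace the same way in the next traced command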
00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:27.190 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
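[editor's note] At this point the initiator addresses (10.0.0.1/24 and 10.0.0.2/24) live on nvmf_init_if/nvmf_init_if2 in the host namespace, the target addresses (10.0.0.3/24 and 10.0.0.4/24) on nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, all four *_br peers are enslaved to the nvmf_br bridge, and port 4420 is opened in iptables. A quick way to eyeball that wiring before trusting the ping checks that follow is (standard iproute2/iptables commands, not part of the test scripts):

bridge link show                                # nvmf_init_br*, nvmf_tgt_br* should all be enslaved to nvmf_br
ip -br addr show                                # initiator side: 10.0.0.1/24 and 10.0.0.2/24
ip netns exec nvmf_tgt_ns_spdk ip -br addr show # target side: 10.0.0.3/24 and 10.0.0.4/24
iptables -S INPUT | grep SPDK_NVMF              # the ACCEPT rules for tcp dport 4420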
00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:27.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:27.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:20:27.450 00:20:27.450 --- 10.0.0.3 ping statistics --- 00:20:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.450 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:27.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:27.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:20:27.450 00:20:27.450 --- 10.0.0.4 ping statistics --- 00:20:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.450 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:27.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:27.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:27.450 00:20:27.450 --- 10.0.0.1 ping statistics --- 00:20:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.450 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:27.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:27.450 00:20:27.450 --- 10.0.0.2 ping statistics --- 00:20:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.450 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82220 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82220 00:20:27.450 14:58:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82220 ']' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.450 14:58:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.451 [2024-11-22 14:58:42.037168] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:20:27.451 [2024-11-22 14:58:42.037261] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.709 [2024-11-22 14:58:42.192265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:27.709 [2024-11-22 14:58:42.252329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.709 [2024-11-22 14:58:42.252427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.709 [2024-11-22 14:58:42.252444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.709 [2024-11-22 14:58:42.252455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.709 [2024-11-22 14:58:42.252464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
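[editor's note] The waitforlisten call above blocks until nvmf_tgt (pid 82220, started inside nvmf_tgt_ns_spdk with -m 0x3) answers on /var/tmp/spdk.sock. The helper itself lives in autotest_common.sh; a minimal stand-in using the public rpc.py interface would look like the loop below (illustrative only, not the actual waitforlisten implementation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
# keep asking for the RPC method list until the target has created and bound its UNIX socket
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done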
00:20:27.709 [2024-11-22 14:58:42.254026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.709 [2024-11-22 14:58:42.254049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.709 [2024-11-22 14:58:42.337071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.967 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:28.226 [2024-11-22 14:58:42.753622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.226 14:58:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:28.484 Malloc0 00:20:28.484 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:28.743 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.001 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:29.260 [2024-11-22 14:58:43.818107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82263 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82263 /var/tmp/bdevperf.sock 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82263 ']' 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
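[editor's note] Stripped of the xtrace prefixes, the target-side provisioning that host/timeout.sh performs above (script lines 25-29, all visible in the trace) reduces to five RPC calls; the tcp.c "TCP Transport Init" and "Listening on 10.0.0.3 port 4420" notices confirm the first and last of them took effect:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                                    # transport options exactly as traced
"$rpc" bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow-any-host subsystem
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf is then launched separately (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f) and the script waits for its own RPC socket before driving it.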
00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.260 14:58:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:29.260 [2024-11-22 14:58:43.894082] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:20:29.260 [2024-11-22 14:58:43.894198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82263 ] 00:20:29.519 [2024-11-22 14:58:44.040541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.519 [2024-11-22 14:58:44.084721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.519 [2024-11-22 14:58:44.138032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:29.778 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.778 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:29.778 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:30.037 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:30.296 NVMe0n1 00:20:30.296 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82279 00:20:30.296 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.296 14:58:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:30.296 Running I/O for 10 seconds... 
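[editor's note] On the initiator side the knobs under test are in the bdev_nvme calls traced above: the -r -1 retry setting, and a controller that tolerates loss of its only path for at most 5 seconds (--ctrlr-loss-timeout-sec 5) while retrying the connection every 2 seconds (--reconnect-delay-sec 2). Condensed from the trace, with bdevperf listening on its own RPC socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# drive the queued verify workload (-q 128 -o 4096 -w verify -t 10 from the bdevperf command line)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The nvmf_subsystem_remove_listener call at host/timeout.sh@55 that opens the next block then drops that only path, and the long run of ABORTED - SQ DELETION completions which follows is the fallout for the reads and writes still in flight.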
00:20:31.232 14:58:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:31.493 8329.00 IOPS, 32.54 MiB/s [2024-11-22T14:58:46.158Z] [2024-11-22 14:58:46.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.493 [2024-11-22 14:58:46.049958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.493 [2024-11-22 14:58:46.049997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.493 [2024-11-22 14:58:46.050008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.493 [2024-11-22 14:58:46.050019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.493 [2024-11-22 14:58:46.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.493 [2024-11-22 14:58:46.050037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.493 [2024-11-22 14:58:46.050045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.493 [2024-11-22 14:58:46.050055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.493 [2024-11-22 14:58:46.050063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78960 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:31.494 [2024-11-22 14:58:46.050382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.494 [2024-11-22 14:58:46.050812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.494 [2024-11-22 14:58:46.050946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.494 [2024-11-22 14:58:46.050955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.050965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.050974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.050984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.050993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 
14:58:46.051171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.495 [2024-11-22 14:58:46.051580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.495 [2024-11-22 14:58:46.051727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.495 [2024-11-22 14:58:46.051736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79320 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.051990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.051999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:31.496 [2024-11-22 14:58:46.052017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:31.496 [2024-11-22 14:58:46.052228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.496 [2024-11-22 14:58:46.052545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.496 [2024-11-22 14:58:46.052555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6270 is same with the state(6) to be set 00:20:31.496 [2024-11-22 14:58:46.052566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:31.497 [2024-11-22 14:58:46.052579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:31.497 [2024-11-22 14:58:46.052587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:20:31.497 [2024-11-22 14:58:46.052595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:31.497 [2024-11-22 14:58:46.052917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:31.497 [2024-11-22 14:58:46.053011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488e50 (9): Bad file descriptor 00:20:31.497 [2024-11-22 14:58:46.053134] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:31.497 [2024-11-22 14:58:46.053158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1488e50 with addr=10.0.0.3, port=4420 00:20:31.497 [2024-11-22 
14:58:46.053169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488e50 is same with the state(6) to be set 00:20:31.497 [2024-11-22 14:58:46.053187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488e50 (9): Bad file descriptor 00:20:31.497 [2024-11-22 14:58:46.053204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:31.497 [2024-11-22 14:58:46.053213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:31.497 [2024-11-22 14:58:46.053224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:31.497 [2024-11-22 14:58:46.053234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:31.497 [2024-11-22 14:58:46.053245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:31.497 14:58:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:33.368 4932.50 IOPS, 19.27 MiB/s [2024-11-22T14:58:48.292Z] 3288.33 IOPS, 12.85 MiB/s [2024-11-22T14:58:48.292Z] [2024-11-22 14:58:48.053351] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.627 [2024-11-22 14:58:48.053446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1488e50 with addr=10.0.0.3, port=4420 00:20:33.627 [2024-11-22 14:58:48.053463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488e50 is same with the state(6) to be set 00:20:33.627 [2024-11-22 14:58:48.053487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488e50 (9): Bad file descriptor 00:20:33.627 [2024-11-22 14:58:48.053506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:33.627 [2024-11-22 14:58:48.053515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:33.627 [2024-11-22 14:58:48.053526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:33.627 [2024-11-22 14:58:48.053537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:33.627 [2024-11-22 14:58:48.053547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:33.627 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:33.627 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:33.627 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:33.886 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:33.886 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:33.886 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:33.886 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:34.144 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:34.144 14:58:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:35.356 2466.25 IOPS, 9.63 MiB/s [2024-11-22T14:58:50.278Z] 1973.00 IOPS, 7.71 MiB/s [2024-11-22T14:58:50.278Z] [2024-11-22 14:58:50.053775] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:35.613 [2024-11-22 14:58:50.053876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1488e50 with addr=10.0.0.3, port=4420 00:20:35.613 [2024-11-22 14:58:50.053893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488e50 is same with the state(6) to be set 00:20:35.613 [2024-11-22 14:58:50.053919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488e50 (9): Bad file descriptor 00:20:35.613 [2024-11-22 14:58:50.053939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:35.613 [2024-11-22 14:58:50.053949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:35.614 [2024-11-22 14:58:50.053960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:35.614 [2024-11-22 14:58:50.053971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:35.614 [2024-11-22 14:58:50.053982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:37.485 1644.17 IOPS, 6.42 MiB/s [2024-11-22T14:58:52.150Z] 1409.29 IOPS, 5.51 MiB/s [2024-11-22T14:58:52.150Z] [2024-11-22 14:58:52.054022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:37.485 [2024-11-22 14:58:52.054081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:37.485 [2024-11-22 14:58:52.054108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:37.485 [2024-11-22 14:58:52.054117] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:37.485 [2024-11-22 14:58:52.054128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:38.421 1233.12 IOPS, 4.82 MiB/s 00:20:38.421 Latency(us) 00:20:38.421 [2024-11-22T14:58:53.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.421 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.421 Verification LBA range: start 0x0 length 0x4000 00:20:38.421 NVMe0n1 : 8.19 1204.89 4.71 15.63 0.00 104715.01 2681.02 7015926.69 00:20:38.421 [2024-11-22T14:58:53.086Z] =================================================================================================================== 00:20:38.421 [2024-11-22T14:58:53.086Z] Total : 1204.89 4.71 15.63 0.00 104715.01 2681.02 7015926.69 00:20:38.421 { 00:20:38.421 "results": [ 00:20:38.421 { 00:20:38.421 "job": "NVMe0n1", 00:20:38.421 "core_mask": "0x4", 00:20:38.421 "workload": "verify", 00:20:38.421 "status": "finished", 00:20:38.421 "verify_range": { 00:20:38.421 "start": 0, 00:20:38.421 "length": 16384 00:20:38.421 }, 00:20:38.421 "queue_depth": 128, 00:20:38.421 "io_size": 4096, 00:20:38.421 "runtime": 8.187456, 00:20:38.421 "iops": 1204.891971327846, 00:20:38.421 "mibps": 4.7066092629993985, 00:20:38.421 "io_failed": 128, 00:20:38.421 "io_timeout": 0, 00:20:38.421 "avg_latency_us": 104715.0137547192, 00:20:38.421 "min_latency_us": 2681.018181818182, 00:20:38.421 "max_latency_us": 7015926.69090909 00:20:38.421 } 00:20:38.421 ], 00:20:38.421 "core_count": 1 00:20:38.421 } 00:20:38.988 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:38.988 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:38.988 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:39.246 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:39.246 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:39.246 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:39.246 14:58:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82279 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82263 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82263 ']' 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82263 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82263 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.506 killing process with pid 82263 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82263' 00:20:39.506 Received shutdown signal, test time was about 9.265892 seconds 
00:20:39.506 00:20:39.506 Latency(us) 00:20:39.506 [2024-11-22T14:58:54.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.506 [2024-11-22T14:58:54.171Z] =================================================================================================================== 00:20:39.506 [2024-11-22T14:58:54.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82263 00:20:39.506 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82263 00:20:39.765 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:40.024 [2024-11-22 14:58:54.505802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82396 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82396 /var/tmp/bdevperf.sock 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82396 ']' 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.024 14:58:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:40.024 [2024-11-22 14:58:54.574613] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:20:40.024 [2024-11-22 14:58:54.574705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82396 ] 00:20:40.282 [2024-11-22 14:58:54.715996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.282 [2024-11-22 14:58:54.763165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.282 [2024-11-22 14:58:54.815417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.216 14:58:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.216 14:58:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:41.216 14:58:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:41.216 14:58:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:41.474 NVMe0n1 00:20:41.474 14:58:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82420 00:20:41.474 14:58:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.474 14:58:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:41.732 Running I/O for 10 seconds... 
00:20:42.669 14:58:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.938 9779.00 IOPS, 38.20 MiB/s [2024-11-22T14:58:57.603Z] [2024-11-22 14:58:57.352566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.938 [2024-11-22 14:58:57.352625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.938 [2024-11-22 14:58:57.352646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.938 [2024-11-22 14:58:57.352656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.939 [2024-11-22 14:58:57.352675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.939 [2024-11-22 14:58:57.352692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.939 [2024-11-22 14:58:57.352829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.939 [2024-11-22 14:58:57.352837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.940 [2024-11-22 14:58:57.352977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:42.940 [2024-11-22 14:58:57.352986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.352995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.941 [2024-11-22 14:58:57.353003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.353012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.941 [2024-11-22 14:58:57.353020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.353030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.941 [2024-11-22 14:58:57.353038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.353047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.941 [2024-11-22 14:58:57.353055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.353064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.941 [2024-11-22 14:58:57.353071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.941 [2024-11-22 14:58:57.353080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 
14:58:57.353157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.943 [2024-11-22 14:58:57.353190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.943 [2024-11-22 14:58:57.353199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.944 [2024-11-22 14:58:57.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.944 [2024-11-22 14:58:57.353226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.944 [2024-11-22 14:58:57.353366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.944 [2024-11-22 14:58:57.353382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.944 [2024-11-22 14:58:57.353431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.944 [2024-11-22 14:58:57.353440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.944 [2024-11-22 14:58:57.353448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.945 [2024-11-22 14:58:57.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.945 [2024-11-22 14:58:57.353663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.945 [2024-11-22 14:58:57.353681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.945 [2024-11-22 14:58:57.353698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.945 [2024-11-22 14:58:57.353716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:42.945 [2024-11-22 14:58:57.353742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.945 [2024-11-22 14:58:57.353749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.946 [2024-11-22 14:58:57.353863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.946 [2024-11-22 14:58:57.353871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 
[2024-11-22 14:58:57.353915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.353990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.947 [2024-11-22 14:58:57.353999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.947 [2024-11-22 14:58:57.354007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.948 [2024-11-22 14:58:57.354017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.948 [2024-11-22 14:58:57.354025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.948 [2024-11-22 14:58:57.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.948 [2024-11-22 14:58:57.354042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.948 [2024-11-22 14:58:57.354051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.948 [2024-11-22 14:58:57.354059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.948 [2024-11-22 14:58:57.354068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.948 [2024-11-22 14:58:57.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.948 [2024-11-22 14:58:57.354088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.948 [2024-11-22 14:58:57.354096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.949 [2024-11-22 14:58:57.354234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.949 [2024-11-22 14:58:57.354253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.949 [2024-11-22 14:58:57.354270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.949 [2024-11-22 14:58:57.354280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.949 [2024-11-22 14:58:57.354289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.950 [2024-11-22 14:58:57.354439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.950 [2024-11-22 14:58:57.354449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.951 [2024-11-22 14:58:57.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.951 [2024-11-22 14:58:57.354476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.951 [2024-11-22 14:58:57.354495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.951 [2024-11-22 14:58:57.354512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.951 [2024-11-22 14:58:57.354530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.951 [2024-11-22 14:58:57.354648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.951 [2024-11-22 14:58:57.354658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.952 
[2024-11-22 14:58:57.354666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:42.952 [2024-11-22 14:58:57.354683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.952 [2024-11-22 14:58:57.354702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.952 [2024-11-22 14:58:57.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.952 [2024-11-22 14:58:57.354739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.952 [2024-11-22 14:58:57.354757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.952 [2024-11-22 14:58:57.354766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.953 [2024-11-22 14:58:57.354918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.953 [2024-11-22 14:58:57.354928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.954 [2024-11-22 14:58:57.354936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.954 [2024-11-22 14:58:57.354946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.954 [2024-11-22 14:58:57.354954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.954 [2024-11-22 14:58:57.354963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965270 is same with the state(6) to be set 00:20:42.954 [2024-11-22 14:58:57.354974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:42.954 [2024-11-22 14:58:57.354981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:42.954 [2024-11-22 14:58:57.354989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90640 len:8 PRP1 0x0 PRP2 0x0 00:20:42.954 [2024-11-22 14:58:57.354997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.954 [2024-11-22 14:58:57.355273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:42.954 [2024-11-22 14:58:57.355349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:42.954 [2024-11-22 14:58:57.355461] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:42.954 [2024-11-22 14:58:57.355482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with addr=10.0.0.3, 
port=4420 00:20:42.954 [2024-11-22 14:58:57.355521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:42.954 [2024-11-22 14:58:57.355539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:42.954 [2024-11-22 14:58:57.355555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:42.954 [2024-11-22 14:58:57.355565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:42.954 [2024-11-22 14:58:57.355582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:42.954 [2024-11-22 14:58:57.355593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:42.955 [2024-11-22 14:58:57.355603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:42.955 14:58:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:43.893 5625.50 IOPS, 21.97 MiB/s [2024-11-22T14:58:58.558Z] [2024-11-22 14:58:58.355712] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.893 [2024-11-22 14:58:58.355794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with addr=10.0.0.3, port=4420 00:20:43.893 [2024-11-22 14:58:58.355823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:43.893 [2024-11-22 14:58:58.355845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:43.893 [2024-11-22 14:58:58.355869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:43.893 [2024-11-22 14:58:58.355893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:43.893 [2024-11-22 14:58:58.355903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:43.893 [2024-11-22 14:58:58.355913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:43.893 [2024-11-22 14:58:58.355923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:43.893 14:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:44.152 [2024-11-22 14:58:58.631332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:44.152 14:58:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82420 00:20:44.721 3750.33 IOPS, 14.65 MiB/s [2024-11-22T14:58:59.386Z] [2024-11-22 14:58:59.374043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
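Editor's note: the listener toggling that drives this part of the nvmf_timeout test can be reproduced with the same rpc.py calls that appear in the trace (host/timeout.sh@91 above, @99 below). The following is only a hedged sketch of that sequence, not part of the captured output; the script path, subsystem NQN, address and port are copied from the log lines, and the ordering (remove, wait, re-add) illustrates the effect seen here rather than the exact test script logic.

# sketch: drop and restore the TCP listener so queued I/O is aborted and reconnects fail until the listener returns
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # host sees ABORTED - SQ DELETION, connect() errno 111
sleep 1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420      # next controller reset/reconnect succeeds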
00:20:46.594 2812.75 IOPS, 10.99 MiB/s [2024-11-22T14:59:02.636Z] 4289.60 IOPS, 16.76 MiB/s [2024-11-22T14:59:03.573Z] 5537.33 IOPS, 21.63 MiB/s [2024-11-22T14:59:04.510Z] 6432.57 IOPS, 25.13 MiB/s [2024-11-22T14:59:05.445Z] 7092.50 IOPS, 27.71 MiB/s [2024-11-22T14:59:06.382Z] 7609.33 IOPS, 29.72 MiB/s [2024-11-22T14:59:06.382Z] 8026.00 IOPS, 31.35 MiB/s
00:20:51.717 Latency(us)
00:20:51.717 [2024-11-22T14:59:06.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:51.717 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:51.717 Verification LBA range: start 0x0 length 0x4000
00:20:51.717 NVMe0n1 : 10.01 8031.10 31.37 0.00 0.00 15911.03 1042.62 3019898.88
00:20:51.717 [2024-11-22T14:59:06.382Z] ===================================================================================================================
00:20:51.717 [2024-11-22T14:59:06.382Z] Total : 8031.10 31.37 0.00 0.00 15911.03 1042.62 3019898.88
00:20:51.717 {
00:20:51.717 "results": [
00:20:51.717 {
00:20:51.717 "job": "NVMe0n1",
00:20:51.717 "core_mask": "0x4",
00:20:51.717 "workload": "verify",
00:20:51.717 "status": "finished",
00:20:51.717 "verify_range": {
00:20:51.717 "start": 0,
00:20:51.717 "length": 16384
00:20:51.717 },
00:20:51.717 "queue_depth": 128,
00:20:51.717 "io_size": 4096,
00:20:51.717 "runtime": 10.007597,
00:20:51.717 "iops": 8031.098774261194,
00:20:51.717 "mibps": 31.37147958695779,
00:20:51.717 "io_failed": 0,
00:20:51.717 "io_timeout": 0,
00:20:51.717 "avg_latency_us": 15911.032300529809,
00:20:51.717 "min_latency_us": 1042.6181818181817,
00:20:51.717 "max_latency_us": 3019898.88
00:20:51.717 }
00:20:51.717 ],
00:20:51.717 "core_count": 1
00:20:51.717 }
00:20:51.717 14:59:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82530
00:20:51.717 14:59:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:51.717 14:59:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:51.976 Running I/O for 10 seconds...
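Editor's note: the JSON block above is the summary bdevperf.py prints when perform_tests finishes; the per-job fields (iops, mibps, io_failed, avg_latency_us and so on) can be read out directly. A minimal sketch, assuming the summary has been saved to a file named bdevperf_result.json and that jq is available on the host (both are assumptions, not part of this job):

# sketch: pull the headline numbers out of a saved bdevperf summary
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed: \(.io_failed)"' bdevperf_result.json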
00:20:52.914 14:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:52.914 9920.00 IOPS, 38.75 MiB/s [2024-11-22T14:59:07.579Z] [2024-11-22 14:59:07.506785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.914 [2024-11-22 14:59:07.506980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.506989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.506997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90952 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.914 [2024-11-22 14:59:07.507230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 14:59:07.507390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.914 [2024-11-22 14:59:07.507400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.914 [2024-11-22 
14:59:07.507408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.507683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.507990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.507998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.915 [2024-11-22 14:59:07.508191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.915 [2024-11-22 14:59:07.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.915 [2024-11-22 14:59:07.508323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 
[2024-11-22 14:59:07.508342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.508724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.508991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.508999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.509016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.509050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.916 [2024-11-22 14:59:07.509069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.509097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.509115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.509133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.916 [2024-11-22 14:59:07.509143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.916 [2024-11-22 14:59:07.509151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.917 [2024-11-22 14:59:07.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.917 [2024-11-22 14:59:07.509370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.917 [2024-11-22 14:59:07.509533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x966330 is same with the state(6) to be set 00:20:52.917 [2024-11-22 14:59:07.509553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:52.917 [2024-11-22 14:59:07.509561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:52.917 [2024-11-22 14:59:07.509569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91960 len:8 PRP1 0x0 PRP2 0x0 00:20:52.917 [2024-11-22 14:59:07.509578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.917 [2024-11-22 14:59:07.509838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:52.917 [2024-11-22 14:59:07.509914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:52.917 [2024-11-22 14:59:07.510005] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.917 [2024-11-22 14:59:07.510025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with 
addr=10.0.0.3, port=4420 00:20:52.917 [2024-11-22 14:59:07.510035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:52.917 [2024-11-22 14:59:07.510051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:52.917 [2024-11-22 14:59:07.510066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:52.917 [2024-11-22 14:59:07.510075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:52.917 [2024-11-22 14:59:07.510085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:52.917 [2024-11-22 14:59:07.510095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:52.917 [2024-11-22 14:59:07.510105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:52.917 14:59:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:53.875 5684.00 IOPS, 22.20 MiB/s [2024-11-22T14:59:08.540Z] [2024-11-22 14:59:08.510227] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.875 [2024-11-22 14:59:08.510314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with addr=10.0.0.3, port=4420 00:20:53.875 [2024-11-22 14:59:08.510331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:53.875 [2024-11-22 14:59:08.510354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:53.875 [2024-11-22 14:59:08.510374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:53.875 [2024-11-22 14:59:08.510397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:53.875 [2024-11-22 14:59:08.510410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:53.875 [2024-11-22 14:59:08.510422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:53.875 [2024-11-22 14:59:08.510433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:55.067 3789.33 IOPS, 14.80 MiB/s [2024-11-22T14:59:09.732Z] [2024-11-22 14:59:09.510556] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.067 [2024-11-22 14:59:09.510620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with addr=10.0.0.3, port=4420 00:20:55.067 [2024-11-22 14:59:09.510635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:55.067 [2024-11-22 14:59:09.510657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:55.067 [2024-11-22 14:59:09.510675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:55.067 [2024-11-22 14:59:09.510685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:55.067 [2024-11-22 14:59:09.510696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:55.067 [2024-11-22 14:59:09.510707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:55.067 [2024-11-22 14:59:09.510717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:56.002 2842.00 IOPS, 11.10 MiB/s [2024-11-22T14:59:10.667Z] [2024-11-22 14:59:10.513551] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.002 [2024-11-22 14:59:10.513607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f7e50 with addr=10.0.0.3, port=4420 00:20:56.002 [2024-11-22 14:59:10.513620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f7e50 is same with the state(6) to be set 00:20:56.002 [2024-11-22 14:59:10.513858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f7e50 (9): Bad file descriptor 00:20:56.002 [2024-11-22 14:59:10.514072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:56.002 [2024-11-22 14:59:10.514084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:56.002 [2024-11-22 14:59:10.514092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:56.002 [2024-11-22 14:59:10.514101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:20:56.002 [2024-11-22 14:59:10.514110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:56.002 14:59:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:56.261 [2024-11-22 14:59:10.766628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:56.261 14:59:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82530 00:20:57.087 2273.60 IOPS, 8.88 MiB/s [2024-11-22T14:59:11.752Z] [2024-11-22 14:59:11.545022] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:20:58.956 3505.17 IOPS, 13.69 MiB/s [2024-11-22T14:59:14.557Z] 4674.14 IOPS, 18.26 MiB/s [2024-11-22T14:59:15.493Z] 5568.88 IOPS, 21.75 MiB/s [2024-11-22T14:59:16.430Z] 6276.33 IOPS, 24.52 MiB/s [2024-11-22T14:59:16.430Z] 6835.90 IOPS, 26.70 MiB/s 00:21:01.765 Latency(us) 00:21:01.765 [2024-11-22T14:59:16.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.765 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.765 Verification LBA range: start 0x0 length 0x4000 00:21:01.765 NVMe0n1 : 10.01 6842.59 26.73 4603.43 0.00 11162.29 629.29 3019898.88 00:21:01.765 [2024-11-22T14:59:16.430Z] =================================================================================================================== 00:21:01.765 [2024-11-22T14:59:16.430Z] Total : 6842.59 26.73 4603.43 0.00 11162.29 0.00 3019898.88 00:21:01.765 { 00:21:01.765 "results": [ 00:21:01.765 { 00:21:01.765 "job": "NVMe0n1", 00:21:01.765 "core_mask": "0x4", 00:21:01.765 "workload": "verify", 00:21:01.765 "status": "finished", 00:21:01.765 "verify_range": { 00:21:01.765 "start": 0, 00:21:01.765 "length": 16384 00:21:01.765 }, 00:21:01.765 "queue_depth": 128, 00:21:01.765 "io_size": 4096, 00:21:01.765 "runtime": 10.007765, 00:21:01.765 "iops": 6842.586731403066, 00:21:01.765 "mibps": 26.728854419543225, 00:21:01.765 "io_failed": 46070, 00:21:01.765 "io_timeout": 0, 00:21:01.765 "avg_latency_us": 11162.289423232138, 00:21:01.765 "min_latency_us": 629.2945454545454, 00:21:01.765 "max_latency_us": 3019898.88 00:21:01.765 } 00:21:01.765 ], 00:21:01.765 "core_count": 1 00:21:01.765 } 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82396 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82396 ']' 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82396 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.765 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82396 00:21:02.024 killing process with pid 82396 00:21:02.024 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.024 00:21:02.024 Latency(us) 00:21:02.024 [2024-11-22T14:59:16.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.024 [2024-11-22T14:59:16.689Z] =================================================================================================================== 00:21:02.024 [2024-11-22T14:59:16.689Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82396' 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82396 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82396 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82643 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82643 /var/tmp/bdevperf.sock 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82643 ']' 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.024 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.283 [2024-11-22 14:59:16.688639] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:21:02.283 [2024-11-22 14:59:16.688737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82643 ] 00:21:02.283 [2024-11-22 14:59:16.835560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.283 [2024-11-22 14:59:16.876964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.283 [2024-11-22 14:59:16.929327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:02.542 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.542 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:02.542 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82647 00:21:02.542 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:02.542 14:59:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82643 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:02.800 14:59:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:03.059 NVMe0n1 00:21:03.059 14:59:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82689 00:21:03.059 14:59:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.059 14:59:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:03.059 Running I/O for 10 seconds... 
00:21:03.994 14:59:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:04.257 18798.00 IOPS, 73.43 MiB/s [2024-11-22T14:59:18.922Z] [2024-11-22 14:59:18.790361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 
14:59:18.790586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.257 [2024-11-22 14:59:18.790599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to 
be set 00:21:04.258 [2024-11-22 14:59:18.790782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.790993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791095] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.258 [2024-11-22 14:59:18.791101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 
00:21:04.259 [2024-11-22 14:59:18.791251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1a80 is same with the state(6) to be set 00:21:04.259 [2024-11-22 14:59:18.791449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.259 [2024-11-22 14:59:18.791903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.259 [2024-11-22 14:59:18.791911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.791920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 
[2024-11-22 14:59:18.791928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.791945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.791955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.791963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.791972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.791980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.791996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.260 [2024-11-22 14:59:18.792313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.260 [2024-11-22 14:59:18.792321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:21:04.260-00:21:04.262, 14:59:18.792330-18.793959: the same pair of NOTICE lines repeats for every outstanding READ on qid:1 (cid:44 through cid:126, then cid:1 and cid:0; len:8 each, only the lba differs) -- nvme_qpair.c: 243:nvme_io_qpair_print_command prints the aborted READ and nvme_qpair.c: 474:spdk_nvme_print_completion prints ABORTED - SQ DELETION (00/08) qid:1, emitted while the I/O submission queue is torn down for the controller reset ...]
00:21:04.262 [2024-11-22 14:59:18.793967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3130 is same with the state(6) to be set
00:21:04.262 [2024-11-22 14:59:18.793982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:04.262 [2024-11-22 14:59:18.793989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:04.262 [2024-11-22 14:59:18.793997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85872 len:8 PRP1 0x0 PRP2 0x0
00:21:04.262 [2024-11-22 14:59:18.794005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.262 [2024-11-22 14:59:18.794129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.262 [2024-11-22 14:59:18.794153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.262 [2024-11-22 14:59:18.794164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.262 [2024-11-22 14:59:18.794171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.262 [2024-11-22 14:59:18.794180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:04.262 [2024-11-22 14:59:18.794187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.262 [2024-11-22 14:59:18.794196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.263 [2024-11-22
14:59:18.794204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.263 [2024-11-22 14:59:18.794211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1355e50 is same with the state(6) to be set 00:21:04.263 [2024-11-22 14:59:18.794496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:04.263 [2024-11-22 14:59:18.794522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355e50 (9): Bad file descriptor 00:21:04.263 [2024-11-22 14:59:18.794606] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.263 [2024-11-22 14:59:18.794628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1355e50 with addr=10.0.0.3, port=4420 00:21:04.263 [2024-11-22 14:59:18.794639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1355e50 is same with the state(6) to be set 00:21:04.263 [2024-11-22 14:59:18.794655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355e50 (9): Bad file descriptor 00:21:04.263 [2024-11-22 14:59:18.794677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:04.263 [2024-11-22 14:59:18.794686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:04.263 [2024-11-22 14:59:18.794696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:04.263 [2024-11-22 14:59:18.794706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:04.263 [2024-11-22 14:59:18.794716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:04.263 14:59:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82689 00:21:06.135 10542.00 IOPS, 41.18 MiB/s [2024-11-22T14:59:21.059Z] 7028.00 IOPS, 27.45 MiB/s [2024-11-22T14:59:21.059Z] [2024-11-22 14:59:20.811413] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.394 [2024-11-22 14:59:20.811468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1355e50 with addr=10.0.0.3, port=4420 00:21:06.394 [2024-11-22 14:59:20.811481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1355e50 is same with the state(6) to be set 00:21:06.394 [2024-11-22 14:59:20.811523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355e50 (9): Bad file descriptor 00:21:06.394 [2024-11-22 14:59:20.811541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:06.394 [2024-11-22 14:59:20.811551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:06.394 [2024-11-22 14:59:20.811560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:06.394 [2024-11-22 14:59:20.811569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
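The "wait 82689" step above is the harness blocking on a backgrounded bdevperf job while the target side is unreachable. A minimal sketch of that run-in-background-then-wait pattern, for orientation only: the queue depth, I/O size and workload match the job summary later in this log, but the binary path, duration, log file and variable names are placeholders, not the literal contents of host/timeout.sh.

    # Illustrative sketch only -- not the script's actual invocation.
    ./build/examples/bdevperf -q 128 -o 4096 -w randread -t 10 &> bdevperf.log &
    bdevperf_pid=$!                 # e.g. 82689 in the run above
    # ... target-side failure is injected while I/O is in flight ...
    wait "${bdevperf_pid}"          # returns once bdevperf finishes or gives up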
00:21:06.394 [2024-11-22 14:59:20.811578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:08.267 5271.00 IOPS, 20.59 MiB/s [2024-11-22T14:59:22.932Z] 4216.80 IOPS, 16.47 MiB/s [2024-11-22T14:59:22.932Z] [2024-11-22 14:59:22.811681] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.267 [2024-11-22 14:59:22.811738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1355e50 with addr=10.0.0.3, port=4420 00:21:08.267 [2024-11-22 14:59:22.811752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1355e50 is same with the state(6) to be set 00:21:08.267 [2024-11-22 14:59:22.811771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355e50 (9): Bad file descriptor 00:21:08.267 [2024-11-22 14:59:22.811787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:08.267 [2024-11-22 14:59:22.811795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:08.267 [2024-11-22 14:59:22.811804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:08.267 [2024-11-22 14:59:22.811826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:08.267 [2024-11-22 14:59:22.811835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:10.142 3514.00 IOPS, 13.73 MiB/s [2024-11-22T14:59:25.065Z] 3012.00 IOPS, 11.77 MiB/s [2024-11-22T14:59:25.065Z] [2024-11-22 14:59:24.811890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:10.400 [2024-11-22 14:59:24.811934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:10.400 [2024-11-22 14:59:24.811959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:10.400 [2024-11-22 14:59:24.811967] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:21:10.400 [2024-11-22 14:59:24.811976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
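For reference, errno = 111 in the connect() failures above is ECONNREFUSED: nothing is listening on 10.0.0.3:4420 any more, so each reconnect attempt (14:59:18, 14:59:20, 14:59:22, roughly two seconds apart) fails immediately and the controller is returned to the failed state until the next delayed retry. The errno mapping can be confirmed with a one-liner (illustrative only, not part of the run):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused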
00:21:11.337 2635.50 IOPS, 10.29 MiB/s 00:21:11.337 Latency(us) 00:21:11.337 [2024-11-22T14:59:26.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.337 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:11.337 NVMe0n1 : 8.13 2592.86 10.13 15.74 0.00 48999.89 1303.27 7046430.72 00:21:11.337 [2024-11-22T14:59:26.002Z] =================================================================================================================== 00:21:11.337 [2024-11-22T14:59:26.002Z] Total : 2592.86 10.13 15.74 0.00 48999.89 1303.27 7046430.72 00:21:11.337 { 00:21:11.337 "results": [ 00:21:11.337 { 00:21:11.337 "job": "NVMe0n1", 00:21:11.337 "core_mask": "0x4", 00:21:11.337 "workload": "randread", 00:21:11.337 "status": "finished", 00:21:11.337 "queue_depth": 128, 00:21:11.337 "io_size": 4096, 00:21:11.337 "runtime": 8.131551, 00:21:11.337 "iops": 2592.863280326226, 00:21:11.337 "mibps": 10.12837218877432, 00:21:11.337 "io_failed": 128, 00:21:11.337 "io_timeout": 0, 00:21:11.337 "avg_latency_us": 48999.890367373526, 00:21:11.337 "min_latency_us": 1303.2727272727273, 00:21:11.337 "max_latency_us": 7046430.72 00:21:11.337 } 00:21:11.337 ], 00:21:11.337 "core_count": 1 00:21:11.337 } 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:11.337 Attaching 5 probes... 00:21:11.337 1356.202757: reset bdev controller NVMe0 00:21:11.337 1356.269638: reconnect bdev controller NVMe0 00:21:11.337 3373.060672: reconnect delay bdev controller NVMe0 00:21:11.337 3373.074664: reconnect bdev controller NVMe0 00:21:11.337 5373.318364: reconnect delay bdev controller NVMe0 00:21:11.337 5373.332308: reconnect bdev controller NVMe0 00:21:11.337 7373.575841: reconnect delay bdev controller NVMe0 00:21:11.337 7373.589184: reconnect bdev controller NVMe0 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82647 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82643 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82643 ']' 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82643 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82643 00:21:11.337 killing process with pid 82643 00:21:11.337 Received shutdown signal, test time was about 8.200766 seconds 00:21:11.337 00:21:11.337 Latency(us) 00:21:11.337 [2024-11-22T14:59:26.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.337 [2024-11-22T14:59:26.002Z] =================================================================================================================== 00:21:11.337 [2024-11-22T14:59:26.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.337 14:59:25 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82643' 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82643 00:21:11.337 14:59:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82643 00:21:11.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:11.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:11.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.856 rmmod nvme_tcp 00:21:11.856 rmmod nvme_fabrics 00:21:11.856 rmmod nvme_keyring 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82220 ']' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82220 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82220 ']' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82220 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82220 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.856 killing process with pid 82220 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82220' 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82220 00:21:11.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82220 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.115 14:59:26 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:12.115 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:12.374 ************************************ 00:21:12.374 END TEST nvmf_timeout 00:21:12.374 ************************************ 00:21:12.374 00:21:12.374 real 0m45.557s 00:21:12.374 user 2m12.431s 00:21:12.374 sys 0m6.106s 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:12.374 00:21:12.374 real 5m2.528s 00:21:12.374 user 13m7.819s 00:21:12.374 sys 1m12.996s 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.374 14:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
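Two details of the nvmf_timeout summary above are worth spelling out: the derived rates in the JSON block are mutually consistent, and the pass condition is simply that the bpftrace log recorded more than two delayed reconnects. A sketch of both checks follows (illustrative; the awk constants are copied from the JSON above, while the file name and the bail-out shape are assumptions based on the grep -c and arithmetic test shown in the trace):

    # Rates implied by the JSON: 4096-byte I/O at 2592.86 IOPS, 128 failed I/Os over 8.13 s.
    awk 'BEGIN {
        iops = 2592.863280326226; io_size = 4096; io_failed = 128; runtime = 8.131551
        printf "MiB/s  = %.2f\n", iops * io_size / (1024 * 1024)   # ~10.13, as reported
        printf "Fail/s = %.2f\n", io_failed / runtime              # ~15.74, as reported
    }'
    # Pass condition: three 'reconnect delay' probes were counted, and 3 <= 2 is false,
    # so the test does not abort at this point.
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( delays <= 2 )); then
        echo "expected more than two delayed reconnects, got ${delays}" >&2
        exit 1
    fi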
00:21:12.374 ************************************ 00:21:12.374 END TEST nvmf_host 00:21:12.374 ************************************ 00:21:12.374 14:59:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:12.374 14:59:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:21:12.374 00:21:12.374 real 12m38.125s 00:21:12.374 user 30m15.872s 00:21:12.374 sys 3m17.860s 00:21:12.374 14:59:26 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.374 14:59:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.374 ************************************ 00:21:12.374 END TEST nvmf_tcp 00:21:12.374 ************************************ 00:21:12.374 14:59:27 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:21:12.374 14:59:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:12.374 14:59:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:12.374 14:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.374 14:59:27 -- common/autotest_common.sh@10 -- # set +x 00:21:12.633 ************************************ 00:21:12.633 START TEST nvmf_dif 00:21:12.633 ************************************ 00:21:12.633 14:59:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:21:12.633 * Looking for test storage... 00:21:12.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:12.633 14:59:27 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:12.633 14:59:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:21:12.633 14:59:27 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:12.633 14:59:27 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:12.633 14:59:27 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.633 14:59:27 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.633 14:59:27 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.633 14:59:27 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.634 --rc genhtml_branch_coverage=1 00:21:12.634 --rc genhtml_function_coverage=1 00:21:12.634 --rc genhtml_legend=1 00:21:12.634 --rc geninfo_all_blocks=1 00:21:12.634 --rc geninfo_unexecuted_blocks=1 00:21:12.634 00:21:12.634 ' 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.634 --rc genhtml_branch_coverage=1 00:21:12.634 --rc genhtml_function_coverage=1 00:21:12.634 --rc genhtml_legend=1 00:21:12.634 --rc geninfo_all_blocks=1 00:21:12.634 --rc geninfo_unexecuted_blocks=1 00:21:12.634 00:21:12.634 ' 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.634 --rc genhtml_branch_coverage=1 00:21:12.634 --rc genhtml_function_coverage=1 00:21:12.634 --rc genhtml_legend=1 00:21:12.634 --rc geninfo_all_blocks=1 00:21:12.634 --rc geninfo_unexecuted_blocks=1 00:21:12.634 00:21:12.634 ' 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:12.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.634 --rc genhtml_branch_coverage=1 00:21:12.634 --rc genhtml_function_coverage=1 00:21:12.634 --rc genhtml_legend=1 00:21:12.634 --rc geninfo_all_blocks=1 00:21:12.634 --rc geninfo_unexecuted_blocks=1 00:21:12.634 00:21:12.634 ' 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.634 14:59:27 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.634 14:59:27 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.634 14:59:27 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.634 14:59:27 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.634 14:59:27 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.634 14:59:27 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:21:12.634 14:59:27 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.634 14:59:27 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:21:12.634 14:59:27 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:12.634 14:59:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.634 14:59:27 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:12.893 Cannot find device 
"nvmf_init_br" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@162 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:12.893 Cannot find device "nvmf_init_br2" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@163 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:12.893 Cannot find device "nvmf_tgt_br" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@164 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.893 Cannot find device "nvmf_tgt_br2" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@165 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:12.893 Cannot find device "nvmf_init_br" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@166 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:12.893 Cannot find device "nvmf_init_br2" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@167 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:12.893 Cannot find device "nvmf_tgt_br" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@168 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:12.893 Cannot find device "nvmf_tgt_br2" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@169 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:12.893 Cannot find device "nvmf_br" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@170 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:12.893 Cannot find device "nvmf_init_if" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@171 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:12.893 Cannot find device "nvmf_init_if2" 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@172 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@173 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@174 -- # true 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:12.893 14:59:27 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:13.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:13.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:13.152 00:21:13.152 --- 10.0.0.3 ping statistics --- 00:21:13.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.152 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:13.152 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:13.152 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:21:13.152 00:21:13.152 --- 10.0.0.4 ping statistics --- 00:21:13.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.152 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:13.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:13.152 00:21:13.152 --- 10.0.0.1 ping statistics --- 00:21:13.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.152 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:13.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:13.152 00:21:13.152 --- 10.0.0.2 ping statistics --- 00:21:13.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.152 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:13.152 14:59:27 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:13.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.411 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:13.411 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.670 14:59:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:13.670 14:59:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83188 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83188 00:21:13.670 14:59:28 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83188 ']' 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
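The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are the expected cleanup pass: nvmf_veth_init first tries to tear down any interfaces left over from a previous run, and each failed teardown is swallowed by the trailing true. The fresh topology it then builds (a veth pair per side, bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace) condenses to roughly the sketch below. Names and addresses are taken from the trace above; the second initiator/target pair and the iptables comment tagging are omitted for brevity.

  # Sketch of the veth/namespace topology built by nvmf_veth_init above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                              # bridge ties the two veth peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                           # initiator -> target, as in the trace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator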
00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.670 14:59:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:13.670 [2024-11-22 14:59:28.191825] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:21:13.670 [2024-11-22 14:59:28.191910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.927 [2024-11-22 14:59:28.343682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.927 [2024-11-22 14:59:28.406740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.927 [2024-11-22 14:59:28.406821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.927 [2024-11-22 14:59:28.406836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.927 [2024-11-22 14:59:28.406847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.928 [2024-11-22 14:59:28.406857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.928 [2024-11-22 14:59:28.407346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.928 [2024-11-22 14:59:28.491023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.928 14:59:28 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.928 14:59:28 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:21:13.928 14:59:28 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.928 14:59:28 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.928 14:59:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:14.186 14:59:28 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.186 14:59:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:21:14.186 14:59:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:14.186 [2024-11-22 14:59:28.627479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.186 14:59:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.186 14:59:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:14.186 ************************************ 00:21:14.186 START TEST fio_dif_1_default 00:21:14.186 ************************************ 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:21:14.186 14:59:28 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:14.186 bdev_null0 00:21:14.186 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:14.187 [2024-11-22 14:59:28.675812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:14.187 { 00:21:14.187 "params": { 00:21:14.187 "name": "Nvme$subsystem", 00:21:14.187 "trtype": "$TEST_TRANSPORT", 00:21:14.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.187 "adrfam": "ipv4", 00:21:14.187 "trsvcid": "$NVMF_PORT", 00:21:14.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.187 "hdgst": ${hdgst:-false}, 00:21:14.187 "ddgst": ${ddgst:-false} 00:21:14.187 }, 00:21:14.187 "method": "bdev_nvme_attach_controller" 00:21:14.187 } 00:21:14.187 EOF 00:21:14.187 )") 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
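Everything the fio_dif_1_default case needs on the target side is created through the rpc_cmd calls traced above. Reproduced outside the test harness, and assuming rpc_cmd resolves to scripts/rpc.py against the /var/tmp/spdk.sock socket reported by waitforlisten, the same bring-up looks roughly like this (all parameter values are verbatim from the trace):

  # Hedged reconstruction of the dif_1_default target setup
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  # TCP transport with DIF insert/strip offload enabled
  rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, protection (DIF) type 1
  rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.3 -s 4420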
00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:14.187 "params": { 00:21:14.187 "name": "Nvme0", 00:21:14.187 "trtype": "tcp", 00:21:14.187 "traddr": "10.0.0.3", 00:21:14.187 "adrfam": "ipv4", 00:21:14.187 "trsvcid": "4420", 00:21:14.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.187 "hdgst": false, 00:21:14.187 "ddgst": false 00:21:14.187 }, 00:21:14.187 "method": "bdev_nvme_attach_controller" 00:21:14.187 }' 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:14.187 14:59:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:14.446 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:14.446 fio-3.35 00:21:14.446 Starting 1 thread 00:21:26.664 00:21:26.664 filename0: (groupid=0, jobs=1): err= 0: pid=83249: Fri Nov 22 14:59:39 2024 00:21:26.664 read: IOPS=10.8k, BW=42.2MiB/s (44.2MB/s)(422MiB/10001msec) 00:21:26.664 slat (usec): min=5, max=140, avg= 7.18, stdev= 2.45 00:21:26.664 clat (usec): min=312, max=1802, avg=348.81, stdev=23.62 00:21:26.664 lat (usec): min=318, max=1811, avg=355.99, stdev=24.51 00:21:26.664 clat percentiles (usec): 00:21:26.664 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 326], 20.00th=[ 334], 00:21:26.664 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 343], 60.00th=[ 347], 00:21:26.664 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 392], 00:21:26.664 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 486], 99.95th=[ 510], 00:21:26.664 | 99.99th=[ 553] 00:21:26.665 bw ( KiB/s): min=40288, max=43872, per=100.00%, avg=43191.58, stdev=876.35, samples=19 00:21:26.665 iops : min=10072, max=10968, avg=10797.89, stdev=219.09, samples=19 00:21:26.665 lat (usec) : 500=99.94%, 750=0.06% 00:21:26.665 lat (msec) : 2=0.01% 00:21:26.665 cpu : usr=84.30%, sys=13.72%, ctx=15, majf=0, minf=9 00:21:26.665 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:26.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.665 issued rwts: total=107988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.665 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:26.665 00:21:26.665 Run status group 0 (all jobs): 00:21:26.665 READ: 
bw=42.2MiB/s (44.2MB/s), 42.2MiB/s-42.2MiB/s (44.2MB/s-44.2MB/s), io=422MiB (442MB), run=10001-10001msec 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 00:21:26.665 real 0m11.074s 00:21:26.665 user 0m9.129s 00:21:26.665 sys 0m1.657s 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 ************************************ 00:21:26.665 END TEST fio_dif_1_default 00:21:26.665 ************************************ 00:21:26.665 14:59:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:26.665 14:59:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.665 14:59:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 ************************************ 00:21:26.665 START TEST fio_dif_1_multi_subsystems 00:21:26.665 ************************************ 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 bdev_null0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
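The single-thread dif_1_default run summarized above (randread, 4 KiB blocks, iodepth 4, roughly 10 seconds against the DIF-type-1 null bdev) is driven by a job file that gen_fio_conf feeds to fio on /dev/fd/61, which the trace does not echo. A hypothetical job file of the same shape is sketched below; the Nvme0n1 filename assumes namespace 1 of the Nvme0 controller attached by the JSON config printed above, and bdev.json stands in for the config fed to fio on /dev/fd/62 in the traced run.

  # dif_1_default.fio -- hypothetical reconstruction of the generated job file
  [global]
  thread=1
  rw=randread
  bs=4096
  iodepth=4
  time_based=1
  runtime=10

  [filename0]
  filename=Nvme0n1

  # Launch as in the trace: the plugin is preloaded and supplies the spdk_bdev engine
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json dif_1_default.fio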
00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 [2024-11-22 14:59:39.804846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 bdev_null1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.665 { 00:21:26.665 "params": { 00:21:26.665 "name": "Nvme$subsystem", 00:21:26.665 "trtype": "$TEST_TRANSPORT", 00:21:26.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.665 "adrfam": "ipv4", 00:21:26.665 "trsvcid": "$NVMF_PORT", 00:21:26.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.665 "hdgst": ${hdgst:-false}, 00:21:26.665 "ddgst": ${ddgst:-false} 00:21:26.665 }, 00:21:26.665 "method": "bdev_nvme_attach_controller" 00:21:26.665 } 00:21:26.665 EOF 00:21:26.665 )") 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # grep libasan 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.665 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.665 { 00:21:26.665 "params": { 00:21:26.666 "name": "Nvme$subsystem", 00:21:26.666 "trtype": "$TEST_TRANSPORT", 00:21:26.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.666 "adrfam": "ipv4", 00:21:26.666 "trsvcid": "$NVMF_PORT", 00:21:26.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.666 "hdgst": ${hdgst:-false}, 00:21:26.666 "ddgst": ${ddgst:-false} 00:21:26.666 }, 00:21:26.666 "method": "bdev_nvme_attach_controller" 00:21:26.666 } 00:21:26.666 EOF 00:21:26.666 )") 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:26.666 "params": { 00:21:26.666 "name": "Nvme0", 00:21:26.666 "trtype": "tcp", 00:21:26.666 "traddr": "10.0.0.3", 00:21:26.666 "adrfam": "ipv4", 00:21:26.666 "trsvcid": "4420", 00:21:26.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.666 "hdgst": false, 00:21:26.666 "ddgst": false 00:21:26.666 }, 00:21:26.666 "method": "bdev_nvme_attach_controller" 00:21:26.666 },{ 00:21:26.666 "params": { 00:21:26.666 "name": "Nvme1", 00:21:26.666 "trtype": "tcp", 00:21:26.666 "traddr": "10.0.0.3", 00:21:26.666 "adrfam": "ipv4", 00:21:26.666 "trsvcid": "4420", 00:21:26.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.666 "hdgst": false, 00:21:26.666 "ddgst": false 00:21:26.666 }, 00:21:26.666 "method": "bdev_nvme_attach_controller" 00:21:26.666 }' 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:26.666 14:59:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:26.666 14:59:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:26.666 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:26.666 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:26.666 fio-3.35 00:21:26.666 Starting 2 threads 00:21:36.676 00:21:36.676 filename0: (groupid=0, jobs=1): err= 0: pid=83409: Fri Nov 22 14:59:50 2024 00:21:36.676 read: IOPS=5728, BW=22.4MiB/s (23.5MB/s)(224MiB/10001msec) 00:21:36.676 slat (nsec): min=5819, max=82662, avg=14698.37, stdev=6834.57 00:21:36.676 clat (usec): min=332, max=1670, avg=658.36, stdev=41.94 00:21:36.676 lat (usec): min=339, max=1722, avg=673.06, stdev=45.23 00:21:36.676 clat percentiles (usec): 00:21:36.676 | 1.00th=[ 594], 5.00th=[ 603], 10.00th=[ 611], 20.00th=[ 627], 00:21:36.676 | 30.00th=[ 635], 40.00th=[ 644], 50.00th=[ 652], 60.00th=[ 660], 00:21:36.676 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 734], 00:21:36.676 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 848], 00:21:36.676 | 99.99th=[ 971] 00:21:36.676 bw ( KiB/s): min=21344, max=24000, per=50.23%, avg=23006.32, stdev=1106.94, samples=19 00:21:36.676 iops : min= 5336, max= 6000, avg=5751.58, stdev=276.74, samples=19 00:21:36.676 lat (usec) : 500=0.13%, 750=97.28%, 1000=2.59% 00:21:36.676 lat (msec) : 2=0.01% 00:21:36.676 cpu : usr=90.39%, sys=8.29%, ctx=13, majf=0, minf=0 00:21:36.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.676 issued rwts: total=57288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.676 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:36.676 filename1: (groupid=0, jobs=1): err= 0: pid=83410: Fri Nov 22 14:59:50 2024 00:21:36.676 read: IOPS=5721, BW=22.3MiB/s (23.4MB/s)(224MiB/10001msec) 00:21:36.676 slat (nsec): min=5854, max=86101, avg=14641.44, stdev=6522.63 00:21:36.676 clat (usec): min=517, max=7754, avg=660.01, stdev=74.87 00:21:36.676 lat (usec): min=524, max=7764, avg=674.65, stdev=77.01 00:21:36.676 clat percentiles (usec): 00:21:36.676 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:21:36.676 | 30.00th=[ 635], 40.00th=[ 644], 50.00th=[ 652], 60.00th=[ 668], 00:21:36.676 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 742], 00:21:36.676 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 840], 99.95th=[ 857], 00:21:36.676 | 99.99th=[ 1680] 00:21:36.676 bw ( KiB/s): min=21344, max=24000, per=50.17%, avg=22976.00, stdev=1116.59, samples=19 00:21:36.676 iops : min= 5336, max= 6000, avg=5744.00, stdev=279.15, samples=19 00:21:36.676 lat (usec) : 750=96.96%, 1000=3.03% 00:21:36.676 lat (msec) : 2=0.01%, 10=0.01% 00:21:36.676 cpu : usr=90.73%, sys=7.79%, ctx=30, majf=0, minf=0 00:21:36.676 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:36.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.676 issued rwts: total=57216,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:36.676 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:36.676 00:21:36.676 Run status group 0 (all jobs): 00:21:36.677 READ: bw=44.7MiB/s (46.9MB/s), 22.3MiB/s-22.4MiB/s (23.4MB/s-23.5MB/s), io=447MiB (469MB), run=10001-10001msec 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 00:21:36.677 real 0m11.136s 00:21:36.677 user 0m18.870s 00:21:36.677 sys 0m1.904s 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 ************************************ 00:21:36.677 END TEST fio_dif_1_multi_subsystems 00:21:36.677 ************************************ 00:21:36.677 14:59:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:36.677 14:59:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:36.677 14:59:50 nvmf_dif 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 ************************************ 00:21:36.677 START TEST fio_dif_rand_params 00:21:36.677 ************************************ 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 bdev_null0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.677 14:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.677 [2024-11-22 14:59:50.996557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.677 { 00:21:36.677 "params": { 00:21:36.677 "name": "Nvme$subsystem", 00:21:36.677 "trtype": "$TEST_TRANSPORT", 00:21:36.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.677 "adrfam": "ipv4", 00:21:36.677 "trsvcid": "$NVMF_PORT", 00:21:36.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.677 "hdgst": ${hdgst:-false}, 00:21:36.677 "ddgst": ${ddgst:-false} 00:21:36.677 }, 00:21:36.677 "method": "bdev_nvme_attach_controller" 00:21:36.677 } 00:21:36.677 EOF 00:21:36.677 )") 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
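The recurring ldd / grep libasan / awk '{print $3}' lines in the trace come from the fio_plugin helper in autotest_common.sh: before launching fio it checks whether the SPDK bdev plugin was linked against a sanitizer runtime and, if so, preloads that runtime ahead of the plugin so it is initialized before the instrumented library is pulled into the plain fio binary. Condensed into a sketch, with json_conf and job_file as placeholders for the /dev/fd/62 and /dev/fd/61 process substitutions used by the test:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break
  done
  # In this run both greps came back empty, so only the plugin itself is preloaded.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf "$json_conf" "$job_file"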
00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.677 "params": { 00:21:36.677 "name": "Nvme0", 00:21:36.677 "trtype": "tcp", 00:21:36.677 "traddr": "10.0.0.3", 00:21:36.677 "adrfam": "ipv4", 00:21:36.677 "trsvcid": "4420", 00:21:36.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:36.677 "hdgst": false, 00:21:36.677 "ddgst": false 00:21:36.677 }, 00:21:36.677 "method": "bdev_nvme_attach_controller" 00:21:36.677 }' 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:36.677 14:59:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.677 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:36.677 ... 
00:21:36.677 fio-3.35 00:21:36.678 Starting 3 threads 00:21:43.247 00:21:43.247 filename0: (groupid=0, jobs=1): err= 0: pid=83568: Fri Nov 22 14:59:56 2024 00:21:43.247 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(203MiB/5005msec) 00:21:43.247 slat (nsec): min=5593, max=59293, avg=12492.89, stdev=7422.72 00:21:43.247 clat (usec): min=8973, max=10096, avg=9218.65, stdev=147.89 00:21:43.247 lat (usec): min=8979, max=10117, avg=9231.14, stdev=148.83 00:21:43.247 clat percentiles (usec): 00:21:43.247 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:21:43.247 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9241], 00:21:43.247 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:43.247 | 99.00th=[ 9896], 99.50th=[ 9896], 99.90th=[10028], 99.95th=[10159], 00:21:43.247 | 99.99th=[10159] 00:21:43.247 bw ( KiB/s): min=40704, max=42240, per=33.37%, avg=41548.11, stdev=464.25, samples=9 00:21:43.247 iops : min= 318, max= 330, avg=324.56, stdev= 3.64, samples=9 00:21:43.247 lat (msec) : 10=99.75%, 20=0.25% 00:21:43.247 cpu : usr=95.02%, sys=4.46%, ctx=9, majf=0, minf=0 00:21:43.247 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:43.247 filename0: (groupid=0, jobs=1): err= 0: pid=83569: Fri Nov 22 14:59:56 2024 00:21:43.247 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(203MiB/5003msec) 00:21:43.247 slat (nsec): min=5919, max=61222, avg=11509.66, stdev=7047.72 00:21:43.247 clat (usec): min=6719, max=10030, avg=9218.36, stdev=187.79 00:21:43.247 lat (usec): min=6726, max=10050, avg=9229.87, stdev=188.13 00:21:43.247 clat percentiles (usec): 00:21:43.247 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:21:43.247 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9241], 00:21:43.247 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:43.247 | 99.00th=[ 9896], 99.50th=[ 9896], 99.90th=[10028], 99.95th=[10028], 00:21:43.247 | 99.99th=[10028] 00:21:43.247 bw ( KiB/s): min=40704, max=42240, per=33.37%, avg=41557.33, stdev=461.51, samples=9 00:21:43.247 iops : min= 318, max= 330, avg=324.67, stdev= 3.61, samples=9 00:21:43.247 lat (msec) : 10=99.75%, 20=0.25% 00:21:43.247 cpu : usr=93.50%, sys=5.96%, ctx=10, majf=0, minf=0 00:21:43.247 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:43.247 filename0: (groupid=0, jobs=1): err= 0: pid=83570: Fri Nov 22 14:59:56 2024 00:21:43.247 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(203MiB/5004msec) 00:21:43.247 slat (nsec): min=5937, max=44792, avg=9396.14, stdev=4514.71 00:21:43.247 clat (usec): min=5574, max=11187, avg=9224.98, stdev=232.46 00:21:43.247 lat (usec): min=5581, max=11216, avg=9234.38, stdev=232.95 00:21:43.247 clat percentiles (usec): 00:21:43.247 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9110], 00:21:43.247 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 
9241], 60.00th=[ 9241], 00:21:43.247 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:43.247 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[11207], 99.95th=[11207], 00:21:43.247 | 99.99th=[11207] 00:21:43.247 bw ( KiB/s): min=40704, max=42240, per=33.30%, avg=41472.00, stdev=384.00, samples=9 00:21:43.247 iops : min= 318, max= 330, avg=324.00, stdev= 3.00, samples=9 00:21:43.247 lat (msec) : 10=99.69%, 20=0.31% 00:21:43.247 cpu : usr=94.40%, sys=5.00%, ctx=18, majf=0, minf=0 00:21:43.247 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.247 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:43.247 00:21:43.247 Run status group 0 (all jobs): 00:21:43.247 READ: bw=122MiB/s (128MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=609MiB (638MB), run=5003-5005msec 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:43.247 14:59:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.247 14:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 bdev_null0 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.247 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 [2024-11-22 14:59:57.025469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 bdev_null1 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 bdev_null2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.248 { 00:21:43.248 "params": { 00:21:43.248 "name": "Nvme$subsystem", 00:21:43.248 "trtype": "$TEST_TRANSPORT", 00:21:43.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.248 "adrfam": "ipv4", 00:21:43.248 "trsvcid": "$NVMF_PORT", 00:21:43.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:43.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.248 "hdgst": ${hdgst:-false}, 00:21:43.248 "ddgst": ${ddgst:-false} 00:21:43.248 }, 00:21:43.248 "method": "bdev_nvme_attach_controller" 00:21:43.248 } 00:21:43.248 EOF 00:21:43.248 )") 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.248 { 00:21:43.248 "params": { 00:21:43.248 "name": "Nvme$subsystem", 00:21:43.248 "trtype": "$TEST_TRANSPORT", 00:21:43.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.248 "adrfam": "ipv4", 00:21:43.248 "trsvcid": "$NVMF_PORT", 00:21:43.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.248 "hdgst": ${hdgst:-false}, 00:21:43.248 "ddgst": ${ddgst:-false} 00:21:43.248 }, 00:21:43.248 "method": "bdev_nvme_attach_controller" 00:21:43.248 } 00:21:43.248 EOF 00:21:43.248 )") 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.248 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:43.248 
14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.248 { 00:21:43.248 "params": { 00:21:43.248 "name": "Nvme$subsystem", 00:21:43.248 "trtype": "$TEST_TRANSPORT", 00:21:43.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.248 "adrfam": "ipv4", 00:21:43.248 "trsvcid": "$NVMF_PORT", 00:21:43.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.249 "hdgst": ${hdgst:-false}, 00:21:43.249 "ddgst": ${ddgst:-false} 00:21:43.249 }, 00:21:43.249 "method": "bdev_nvme_attach_controller" 00:21:43.249 } 00:21:43.249 EOF 00:21:43.249 )") 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:43.249 "params": { 00:21:43.249 "name": "Nvme0", 00:21:43.249 "trtype": "tcp", 00:21:43.249 "traddr": "10.0.0.3", 00:21:43.249 "adrfam": "ipv4", 00:21:43.249 "trsvcid": "4420", 00:21:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:43.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:43.249 "hdgst": false, 00:21:43.249 "ddgst": false 00:21:43.249 }, 00:21:43.249 "method": "bdev_nvme_attach_controller" 00:21:43.249 },{ 00:21:43.249 "params": { 00:21:43.249 "name": "Nvme1", 00:21:43.249 "trtype": "tcp", 00:21:43.249 "traddr": "10.0.0.3", 00:21:43.249 "adrfam": "ipv4", 00:21:43.249 "trsvcid": "4420", 00:21:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.249 "hdgst": false, 00:21:43.249 "ddgst": false 00:21:43.249 }, 00:21:43.249 "method": "bdev_nvme_attach_controller" 00:21:43.249 },{ 00:21:43.249 "params": { 00:21:43.249 "name": "Nvme2", 00:21:43.249 "trtype": "tcp", 00:21:43.249 "traddr": "10.0.0.3", 00:21:43.249 "adrfam": "ipv4", 00:21:43.249 "trsvcid": "4420", 00:21:43.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.249 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.249 "hdgst": false, 00:21:43.249 "ddgst": false 00:21:43.249 }, 00:21:43.249 "method": "bdev_nvme_attach_controller" 00:21:43.249 }' 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:43.249 14:59:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:43.249 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:43.249 ... 00:21:43.249 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:43.249 ... 00:21:43.249 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:43.249 ... 00:21:43.249 fio-3.35 00:21:43.249 Starting 24 threads 00:21:55.460 00:21:55.460 filename0: (groupid=0, jobs=1): err= 0: pid=83669: Fri Nov 22 15:00:08 2024 00:21:55.460 read: IOPS=284, BW=1137KiB/s (1165kB/s)(11.1MiB/10014msec) 00:21:55.460 slat (usec): min=3, max=9068, avg=46.09, stdev=432.38 00:21:55.460 clat (msec): min=16, max=107, avg=56.09, stdev=15.59 00:21:55.460 lat (msec): min=16, max=107, avg=56.13, stdev=15.60 00:21:55.460 clat percentiles (msec): 00:21:55.460 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:21:55.460 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 61], 00:21:55.460 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 84], 00:21:55.460 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.460 | 99.99th=[ 108] 00:21:55.460 bw ( KiB/s): min= 816, max= 1520, per=4.32%, avg=1133.89, stdev=133.25, samples=19 00:21:55.460 iops : min= 204, max= 380, avg=283.47, stdev=33.31, samples=19 00:21:55.460 lat (msec) : 20=0.21%, 50=41.38%, 100=57.39%, 250=1.02% 00:21:55.460 cpu : usr=37.31%, sys=1.36%, ctx=1029, majf=0, minf=9 00:21:55.460 IO depths : 1=0.1%, 2=0.4%, 4=1.1%, 8=82.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:55.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.460 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.460 issued rwts: total=2847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.460 filename0: (groupid=0, jobs=1): err= 0: pid=83670: Fri Nov 22 15:00:08 2024 00:21:55.460 read: IOPS=267, BW=1071KiB/s (1096kB/s)(10.5MiB/10013msec) 00:21:55.460 slat (usec): min=4, max=8025, avg=32.03, stdev=292.55 00:21:55.460 clat (msec): min=14, max=138, avg=59.63, stdev=15.40 00:21:55.460 lat (msec): min=14, max=138, avg=59.66, stdev=15.41 00:21:55.460 clat percentiles (msec): 00:21:55.460 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 47], 00:21:55.460 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:21:55.460 | 70.00th=[ 67], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 88], 00:21:55.460 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 109], 99.95th=[ 109], 00:21:55.460 | 99.99th=[ 138] 00:21:55.460 bw ( KiB/s): min= 816, max= 1520, per=4.05%, avg=1062.32, stdev=140.59, samples=19 00:21:55.460 iops : min= 204, max= 380, avg=265.58, stdev=35.15, samples=19 00:21:55.460 lat (msec) : 20=0.26%, 50=30.37%, 100=68.32%, 250=1.04% 00:21:55.460 cpu : usr=37.08%, sys=1.31%, ctx=1340, majf=0, minf=9 00:21:55.460 IO depths : 1=0.1%, 2=1.4%, 4=5.3%, 8=77.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:55.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.460 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.460 issued rwts: total=2680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.460 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:21:55.460 filename0: (groupid=0, jobs=1): err= 0: pid=83671: Fri Nov 22 15:00:08 2024 00:21:55.460 read: IOPS=284, BW=1139KiB/s (1167kB/s)(11.1MiB/10003msec) 00:21:55.460 slat (usec): min=3, max=8067, avg=43.93, stdev=386.84 00:21:55.460 clat (usec): min=1780, max=119899, avg=55991.83, stdev=17876.78 00:21:55.460 lat (usec): min=1792, max=119910, avg=56035.76, stdev=17874.76 00:21:55.460 clat percentiles (msec): 00:21:55.460 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 41], 00:21:55.460 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 62], 00:21:55.460 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 77], 95.00th=[ 85], 00:21:55.461 | 99.00th=[ 100], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.461 | 99.99th=[ 121] 00:21:55.461 bw ( KiB/s): min= 784, max= 1634, per=4.24%, avg=1111.74, stdev=166.44, samples=19 00:21:55.461 iops : min= 196, max= 408, avg=277.79, stdev=41.55, samples=19 00:21:55.461 lat (msec) : 2=0.07%, 4=0.49%, 10=1.23%, 20=0.81%, 50=35.91% 00:21:55.461 lat (msec) : 100=60.79%, 250=0.70% 00:21:55.461 cpu : usr=40.82%, sys=1.40%, ctx=1260, majf=0, minf=9 00:21:55.461 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=83672: Fri Nov 22 15:00:08 2024 00:21:55.461 read: IOPS=287, BW=1149KiB/s (1176kB/s)(11.2MiB/10002msec) 00:21:55.461 slat (usec): min=3, max=12009, avg=42.94, stdev=425.35 00:21:55.461 clat (msec): min=2, max=107, avg=55.52, stdev=16.49 00:21:55.461 lat (msec): min=2, max=107, avg=55.56, stdev=16.47 00:21:55.461 clat percentiles (msec): 00:21:55.461 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 43], 00:21:55.461 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:21:55.461 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 84], 00:21:55.461 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.461 | 99.99th=[ 108] 00:21:55.461 bw ( KiB/s): min= 768, max= 1440, per=4.29%, avg=1125.89, stdev=129.27, samples=19 00:21:55.461 iops : min= 192, max= 360, avg=281.47, stdev=32.32, samples=19 00:21:55.461 lat (msec) : 4=0.63%, 10=0.59%, 20=0.28%, 50=40.25%, 100=57.35% 00:21:55.461 lat (msec) : 250=0.91% 00:21:55.461 cpu : usr=32.73%, sys=1.27%, ctx=992, majf=0, minf=9 00:21:55.461 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=83673: Fri Nov 22 15:00:08 2024 00:21:55.461 read: IOPS=273, BW=1093KiB/s (1119kB/s)(10.7MiB/10039msec) 00:21:55.461 slat (usec): min=6, max=4043, avg=26.73, stdev=183.50 00:21:55.461 clat (msec): min=9, max=132, avg=58.35, stdev=17.73 00:21:55.461 lat (msec): min=9, max=132, avg=58.37, stdev=17.72 00:21:55.461 clat percentiles (msec): 00:21:55.461 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 46], 00:21:55.461 | 30.00th=[ 51], 
40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:21:55.461 | 70.00th=[ 67], 80.00th=[ 71], 90.00th=[ 80], 95.00th=[ 87], 00:21:55.461 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 131], 00:21:55.461 | 99.99th=[ 132] 00:21:55.461 bw ( KiB/s): min= 712, max= 2160, per=4.16%, avg=1090.80, stdev=267.63, samples=20 00:21:55.461 iops : min= 178, max= 540, avg=272.70, stdev=66.91, samples=20 00:21:55.461 lat (msec) : 10=0.51%, 20=2.84%, 50=26.54%, 100=68.98%, 250=1.13% 00:21:55.461 cpu : usr=39.62%, sys=1.29%, ctx=1267, majf=0, minf=9 00:21:55.461 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=79.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=88.5%, 8=10.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=83674: Fri Nov 22 15:00:08 2024 00:21:55.461 read: IOPS=278, BW=1113KiB/s (1140kB/s)(10.9MiB/10043msec) 00:21:55.461 slat (usec): min=4, max=8020, avg=27.26, stdev=280.44 00:21:55.461 clat (msec): min=2, max=107, avg=57.32, stdev=18.74 00:21:55.461 lat (msec): min=2, max=107, avg=57.35, stdev=18.74 00:21:55.461 clat percentiles (msec): 00:21:55.461 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 46], 00:21:55.461 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:21:55.461 | 70.00th=[ 67], 80.00th=[ 71], 90.00th=[ 78], 95.00th=[ 88], 00:21:55.461 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.461 | 99.99th=[ 108] 00:21:55.461 bw ( KiB/s): min= 768, max= 2560, per=4.24%, avg=1113.60, stdev=353.27, samples=20 00:21:55.461 iops : min= 192, max= 640, avg=278.40, stdev=88.32, samples=20 00:21:55.461 lat (msec) : 4=1.57%, 10=0.72%, 20=2.29%, 50=27.13%, 100=67.72% 00:21:55.461 lat (msec) : 250=0.57% 00:21:55.461 cpu : usr=37.74%, sys=1.21%, ctx=1148, majf=0, minf=0 00:21:55.461 IO depths : 1=0.1%, 2=0.9%, 4=3.1%, 8=79.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=83675: Fri Nov 22 15:00:08 2024 00:21:55.461 read: IOPS=278, BW=1114KiB/s (1141kB/s)(10.9MiB/10033msec) 00:21:55.461 slat (usec): min=3, max=8028, avg=34.72, stdev=306.43 00:21:55.461 clat (msec): min=13, max=120, avg=57.22, stdev=16.89 00:21:55.461 lat (msec): min=13, max=120, avg=57.26, stdev=16.90 00:21:55.461 clat percentiles (msec): 00:21:55.461 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 44], 00:21:55.461 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:21:55.461 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 88], 00:21:55.461 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 117], 99.95th=[ 121], 00:21:55.461 | 99.99th=[ 121] 00:21:55.461 bw ( KiB/s): min= 712, max= 1896, per=4.24%, avg=1113.26, stdev=218.52, samples=19 00:21:55.461 iops : min= 178, max= 474, avg=278.32, stdev=54.63, samples=19 00:21:55.461 lat (msec) : 20=1.04%, 50=33.56%, 100=64.72%, 250=0.68% 00:21:55.461 cpu : usr=39.22%, sys=1.26%, ctx=1258, majf=0, minf=9 00:21:55.461 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.8%, 16=16.1%, 32=0.0%, >=64=0.0% 
00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=83676: Fri Nov 22 15:00:08 2024 00:21:55.461 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.8MiB/10006msec) 00:21:55.461 slat (usec): min=4, max=8070, avg=36.22, stdev=320.59 00:21:55.461 clat (msec): min=13, max=119, avg=57.61, stdev=15.49 00:21:55.461 lat (msec): min=14, max=119, avg=57.65, stdev=15.49 00:21:55.461 clat percentiles (msec): 00:21:55.461 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:21:55.461 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 62], 00:21:55.461 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 85], 00:21:55.461 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 111], 99.95th=[ 111], 00:21:55.461 | 99.99th=[ 121] 00:21:55.461 bw ( KiB/s): min= 792, max= 1320, per=4.19%, avg=1101.00, stdev=116.11, samples=19 00:21:55.461 iops : min= 198, max= 330, avg=275.21, stdev=29.01, samples=19 00:21:55.461 lat (msec) : 20=0.32%, 50=36.35%, 100=62.49%, 250=0.83% 00:21:55.461 cpu : usr=38.08%, sys=1.23%, ctx=1076, majf=0, minf=9 00:21:55.461 IO depths : 1=0.1%, 2=0.4%, 4=1.1%, 8=82.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.461 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83677: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=274, BW=1097KiB/s (1123kB/s)(10.7MiB/10004msec) 00:21:55.462 slat (usec): min=4, max=8057, avg=36.78, stdev=342.23 00:21:55.462 clat (msec): min=6, max=107, avg=58.17, stdev=16.96 00:21:55.462 lat (msec): min=6, max=107, avg=58.20, stdev=16.96 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 46], 00:21:55.462 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 61], 00:21:55.462 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 87], 00:21:55.462 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.462 | 99.99th=[ 108] 00:21:55.462 bw ( KiB/s): min= 768, max= 1648, per=4.12%, avg=1081.05, stdev=163.36, samples=19 00:21:55.462 iops : min= 192, max= 412, avg=270.21, stdev=40.86, samples=19 00:21:55.462 lat (msec) : 10=0.47%, 20=0.44%, 50=34.22%, 100=63.96%, 250=0.91% 00:21:55.462 cpu : usr=32.29%, sys=0.96%, ctx=856, majf=0, minf=9 00:21:55.462 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 issued rwts: total=2744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83678: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=277, BW=1109KiB/s (1136kB/s)(10.8MiB/10007msec) 00:21:55.462 slat (usec): min=4, max=8044, avg=46.80, stdev=449.46 00:21:55.462 clat (msec): min=10, max=108, avg=57.47, stdev=16.43 00:21:55.462 lat (msec): min=10, max=108, avg=57.52, 
stdev=16.44 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 46], 00:21:55.462 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 61], 00:21:55.462 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 75], 95.00th=[ 85], 00:21:55.462 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 109], 00:21:55.462 | 99.99th=[ 109] 00:21:55.462 bw ( KiB/s): min= 768, max= 1776, per=4.19%, avg=1100.68, stdev=185.62, samples=19 00:21:55.462 iops : min= 192, max= 444, avg=275.16, stdev=46.41, samples=19 00:21:55.462 lat (msec) : 20=0.79%, 50=35.32%, 100=63.28%, 250=0.61% 00:21:55.462 cpu : usr=34.18%, sys=1.43%, ctx=931, majf=0, minf=9 00:21:55.462 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 issued rwts: total=2775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83679: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10044msec) 00:21:55.462 slat (usec): min=3, max=9031, avg=24.57, stdev=217.56 00:21:55.462 clat (msec): min=16, max=131, avg=58.06, stdev=15.85 00:21:55.462 lat (msec): min=16, max=131, avg=58.08, stdev=15.85 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 45], 00:21:55.462 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:21:55.462 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 79], 95.00th=[ 85], 00:21:55.462 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 108], 99.95th=[ 113], 00:21:55.462 | 99.99th=[ 132] 00:21:55.462 bw ( KiB/s): min= 736, max= 1656, per=4.18%, avg=1096.84, stdev=166.79, samples=19 00:21:55.462 iops : min= 184, max= 414, avg=274.21, stdev=41.70, samples=19 00:21:55.462 lat (msec) : 20=0.51%, 50=33.67%, 100=64.84%, 250=0.98% 00:21:55.462 cpu : usr=36.21%, sys=1.03%, ctx=1227, majf=0, minf=9 00:21:55.462 IO depths : 1=0.2%, 2=0.5%, 4=1.7%, 8=81.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 issued rwts: total=2762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83680: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=274, BW=1099KiB/s (1125kB/s)(10.8MiB/10034msec) 00:21:55.462 slat (usec): min=5, max=4053, avg=24.15, stdev=133.18 00:21:55.462 clat (msec): min=16, max=106, avg=58.10, stdev=16.22 00:21:55.462 lat (msec): min=16, max=106, avg=58.13, stdev=16.22 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 45], 00:21:55.462 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 63], 00:21:55.462 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 87], 00:21:55.462 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107], 00:21:55.462 | 99.99th=[ 107] 00:21:55.462 bw ( KiB/s): min= 800, max= 1777, per=4.18%, avg=1095.21, stdev=185.11, samples=19 00:21:55.462 iops : min= 200, max= 444, avg=273.79, stdev=46.23, samples=19 00:21:55.462 lat (msec) : 20=1.02%, 50=31.17%, 100=66.87%, 250=0.94% 00:21:55.462 cpu : usr=42.48%, sys=1.52%, ctx=1247, 
majf=0, minf=9 00:21:55.462 IO depths : 1=0.1%, 2=0.8%, 4=3.5%, 8=79.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83681: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10028msec) 00:21:55.462 slat (usec): min=3, max=12031, avg=30.51, stdev=305.62 00:21:55.462 clat (msec): min=16, max=121, avg=61.70, stdev=15.51 00:21:55.462 lat (msec): min=16, max=121, avg=61.73, stdev=15.51 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 48], 00:21:55.462 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 64], 00:21:55.462 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 88], 00:21:55.462 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 121], 00:21:55.462 | 99.99th=[ 122] 00:21:55.462 bw ( KiB/s): min= 768, max= 1539, per=3.91%, avg=1026.37, stdev=147.09, samples=19 00:21:55.462 iops : min= 192, max= 384, avg=256.53, stdev=36.65, samples=19 00:21:55.462 lat (msec) : 20=0.69%, 50=23.50%, 100=75.12%, 250=0.69% 00:21:55.462 cpu : usr=32.29%, sys=1.05%, ctx=912, majf=0, minf=9 00:21:55.462 IO depths : 1=0.2%, 2=1.3%, 4=4.7%, 8=77.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 complete : 0=0.0%, 4=89.1%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.462 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=83682: Fri Nov 22 15:00:08 2024 00:21:55.462 read: IOPS=266, BW=1066KiB/s (1092kB/s)(10.5MiB/10039msec) 00:21:55.462 slat (usec): min=4, max=4040, avg=20.67, stdev=110.27 00:21:55.462 clat (msec): min=13, max=119, avg=59.83, stdev=17.41 00:21:55.462 lat (msec): min=13, max=119, avg=59.85, stdev=17.41 00:21:55.462 clat percentiles (msec): 00:21:55.462 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 48], 00:21:55.462 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:21:55.462 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 88], 00:21:55.462 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 109], 99.95th=[ 110], 00:21:55.462 | 99.99th=[ 121] 00:21:55.462 bw ( KiB/s): min= 744, max= 2032, per=4.07%, avg=1066.80, stdev=242.63, samples=20 00:21:55.462 iops : min= 186, max= 508, avg=266.70, stdev=60.66, samples=20 00:21:55.462 lat (msec) : 20=3.06%, 50=23.69%, 100=72.16%, 250=1.08% 00:21:55.462 cpu : usr=34.52%, sys=1.14%, ctx=939, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=89.0%, 8=10.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename1: (groupid=0, jobs=1): err= 0: pid=83683: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10033msec) 00:21:55.463 slat (usec): min=3, max=11010, avg=34.25, stdev=338.37 00:21:55.463 clat (msec): 
min=16, max=112, avg=58.79, stdev=15.40 00:21:55.463 lat (msec): min=16, max=112, avg=58.82, stdev=15.40 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:21:55.463 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 62], 00:21:55.463 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 85], 00:21:55.463 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 112], 99.95th=[ 112], 00:21:55.463 | 99.99th=[ 113] 00:21:55.463 bw ( KiB/s): min= 768, max= 1547, per=4.12%, avg=1081.84, stdev=147.83, samples=19 00:21:55.463 iops : min= 192, max= 386, avg=270.42, stdev=36.83, samples=19 00:21:55.463 lat (msec) : 20=0.15%, 50=32.51%, 100=66.02%, 250=1.32% 00:21:55.463 cpu : usr=33.61%, sys=1.18%, ctx=973, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename1: (groupid=0, jobs=1): err= 0: pid=83684: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=277, BW=1109KiB/s (1135kB/s)(10.8MiB/10015msec) 00:21:55.463 slat (usec): min=3, max=8041, avg=32.04, stdev=281.37 00:21:55.463 clat (msec): min=15, max=104, avg=57.57, stdev=16.57 00:21:55.463 lat (msec): min=15, max=104, avg=57.60, stdev=16.57 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 44], 00:21:55.463 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:21:55.463 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 88], 00:21:55.463 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 105], 99.95th=[ 105], 00:21:55.463 | 99.99th=[ 105] 00:21:55.463 bw ( KiB/s): min= 768, max= 1680, per=4.19%, avg=1100.21, stdev=168.56, samples=19 00:21:55.463 iops : min= 192, max= 420, avg=275.05, stdev=42.14, samples=19 00:21:55.463 lat (msec) : 20=0.72%, 50=35.01%, 100=63.18%, 250=1.08% 00:21:55.463 cpu : usr=40.61%, sys=1.20%, ctx=1312, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=83685: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10026msec) 00:21:55.463 slat (usec): min=4, max=8061, avg=33.87, stdev=291.14 00:21:55.463 clat (msec): min=18, max=108, avg=58.01, stdev=15.49 00:21:55.463 lat (msec): min=18, max=108, avg=58.05, stdev=15.51 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 46], 00:21:55.463 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 61], 00:21:55.463 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 75], 95.00th=[ 85], 00:21:55.463 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:21:55.463 | 99.99th=[ 109] 00:21:55.463 bw ( KiB/s): min= 768, max= 1648, per=4.19%, avg=1099.79, stdev=164.00, samples=19 00:21:55.463 iops : min= 192, max= 412, avg=274.95, stdev=41.00, samples=19 00:21:55.463 lat (msec) : 
20=0.07%, 50=33.99%, 100=64.93%, 250=1.02% 00:21:55.463 cpu : usr=36.74%, sys=1.02%, ctx=1047, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=83686: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=271, BW=1088KiB/s (1114kB/s)(10.7MiB/10037msec) 00:21:55.463 slat (usec): min=5, max=8061, avg=37.67, stdev=359.32 00:21:55.463 clat (msec): min=13, max=126, avg=58.64, stdev=17.27 00:21:55.463 lat (msec): min=13, max=126, avg=58.68, stdev=17.27 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 19], 5.00th=[ 27], 10.00th=[ 38], 20.00th=[ 46], 00:21:55.463 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 62], 00:21:55.463 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 92], 00:21:55.463 | 99.00th=[ 100], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 124], 00:21:55.463 | 99.99th=[ 127] 00:21:55.463 bw ( KiB/s): min= 760, max= 1904, per=4.14%, avg=1086.74, stdev=220.90, samples=19 00:21:55.463 iops : min= 190, max= 476, avg=271.68, stdev=55.23, samples=19 00:21:55.463 lat (msec) : 20=1.54%, 50=30.48%, 100=67.33%, 250=0.66% 00:21:55.463 cpu : usr=36.80%, sys=1.23%, ctx=1017, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=83687: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10047msec) 00:21:55.463 slat (usec): min=6, max=4056, avg=26.56, stdev=183.03 00:21:55.463 clat (msec): min=14, max=107, avg=59.58, stdev=17.07 00:21:55.463 lat (msec): min=14, max=107, avg=59.61, stdev=17.06 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 40], 20.00th=[ 47], 00:21:55.463 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64], 00:21:55.463 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 89], 00:21:55.463 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.463 | 99.99th=[ 108] 00:21:55.463 bw ( KiB/s): min= 736, max= 1936, per=4.08%, avg=1071.20, stdev=224.30, samples=20 00:21:55.463 iops : min= 184, max= 484, avg=267.80, stdev=56.07, samples=20 00:21:55.463 lat (msec) : 20=1.41%, 50=26.90%, 100=70.42%, 250=1.26% 00:21:55.463 cpu : usr=37.74%, sys=1.31%, ctx=1126, majf=0, minf=9 00:21:55.463 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=88.8%, 8=10.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=83688: Fri Nov 22 15:00:08 2024 00:21:55.463 read: IOPS=260, BW=1041KiB/s 
(1066kB/s)(10.2MiB/10028msec) 00:21:55.463 slat (usec): min=3, max=8061, avg=43.78, stdev=373.70 00:21:55.463 clat (msec): min=16, max=120, avg=61.22, stdev=15.20 00:21:55.463 lat (msec): min=16, max=120, avg=61.27, stdev=15.20 00:21:55.463 clat percentiles (msec): 00:21:55.463 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:21:55.463 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 64], 00:21:55.463 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 80], 95.00th=[ 90], 00:21:55.463 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:21:55.463 | 99.99th=[ 121] 00:21:55.463 bw ( KiB/s): min= 744, max= 1408, per=3.94%, avg=1034.53, stdev=131.83, samples=19 00:21:55.463 iops : min= 186, max= 352, avg=258.63, stdev=32.96, samples=19 00:21:55.463 lat (msec) : 20=0.08%, 50=25.71%, 100=73.10%, 250=1.11% 00:21:55.463 cpu : usr=38.15%, sys=1.19%, ctx=1245, majf=0, minf=9 00:21:55.463 IO depths : 1=0.2%, 2=2.1%, 4=7.5%, 8=75.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.463 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.464 filename2: (groupid=0, jobs=1): err= 0: pid=83689: Fri Nov 22 15:00:08 2024 00:21:55.464 read: IOPS=284, BW=1137KiB/s (1164kB/s)(11.1MiB/10005msec) 00:21:55.464 slat (usec): min=3, max=7973, avg=30.37, stdev=210.60 00:21:55.464 clat (msec): min=6, max=119, avg=56.14, stdev=16.24 00:21:55.464 lat (msec): min=6, max=119, avg=56.17, stdev=16.25 00:21:55.464 clat percentiles (msec): 00:21:55.464 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 43], 00:21:55.464 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 00:21:55.464 | 70.00th=[ 65], 80.00th=[ 69], 90.00th=[ 75], 95.00th=[ 84], 00:21:55.464 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 110], 99.95th=[ 110], 00:21:55.464 | 99.99th=[ 120] 00:21:55.464 bw ( KiB/s): min= 736, max= 1664, per=4.29%, avg=1125.74, stdev=166.90, samples=19 00:21:55.464 iops : min= 184, max= 416, avg=281.37, stdev=41.75, samples=19 00:21:55.464 lat (msec) : 10=0.21%, 20=0.32%, 50=38.92%, 100=59.74%, 250=0.81% 00:21:55.464 cpu : usr=43.26%, sys=1.34%, ctx=1431, majf=0, minf=9 00:21:55.464 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 issued rwts: total=2844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.464 filename2: (groupid=0, jobs=1): err= 0: pid=83690: Fri Nov 22 15:00:08 2024 00:21:55.464 read: IOPS=273, BW=1092KiB/s (1118kB/s)(10.7MiB/10045msec) 00:21:55.464 slat (usec): min=5, max=10035, avg=23.55, stdev=256.71 00:21:55.464 clat (usec): min=1224, max=122155, avg=58354.46, stdev=25163.98 00:21:55.464 lat (usec): min=1231, max=122163, avg=58378.01, stdev=25168.12 00:21:55.464 clat percentiles (usec): 00:21:55.464 | 1.00th=[ 1287], 5.00th=[ 2442], 10.00th=[ 16057], 20.00th=[ 46400], 00:21:55.464 | 30.00th=[ 56361], 40.00th=[ 60031], 50.00th=[ 62129], 60.00th=[ 66323], 00:21:55.464 | 70.00th=[ 70779], 80.00th=[ 76022], 90.00th=[ 85459], 95.00th=[ 95945], 00:21:55.464 | 99.00th=[105382], 99.50th=[107480], 99.90th=[120062], 99.95th=[121111], 00:21:55.464 | 99.99th=[122160] 
00:21:55.464 bw ( KiB/s): min= 688, max= 3960, per=4.16%, avg=1090.40, stdev=681.36, samples=20 00:21:55.464 iops : min= 172, max= 990, avg=272.60, stdev=170.34, samples=20 00:21:55.464 lat (msec) : 2=4.67%, 4=2.77%, 10=0.66%, 20=2.99%, 50=14.55% 00:21:55.464 lat (msec) : 100=72.26%, 250=2.11% 00:21:55.464 cpu : usr=39.60%, sys=1.32%, ctx=1209, majf=0, minf=0 00:21:55.464 IO depths : 1=0.5%, 2=3.6%, 4=12.6%, 8=68.8%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 complete : 0=0.0%, 4=91.2%, 8=6.0%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.464 filename2: (groupid=0, jobs=1): err= 0: pid=83691: Fri Nov 22 15:00:08 2024 00:21:55.464 read: IOPS=278, BW=1116KiB/s (1142kB/s)(10.9MiB/10040msec) 00:21:55.464 slat (usec): min=6, max=8041, avg=28.88, stdev=264.17 00:21:55.464 clat (msec): min=13, max=114, avg=57.17, stdev=16.55 00:21:55.464 lat (msec): min=13, max=114, avg=57.20, stdev=16.56 00:21:55.464 clat percentiles (msec): 00:21:55.464 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 37], 20.00th=[ 43], 00:21:55.464 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 63], 00:21:55.464 | 70.00th=[ 65], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 86], 00:21:55.464 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:21:55.464 | 99.99th=[ 115] 00:21:55.464 bw ( KiB/s): min= 736, max= 1888, per=4.26%, avg=1116.40, stdev=205.91, samples=20 00:21:55.464 iops : min= 184, max= 472, avg=279.10, stdev=51.48, samples=20 00:21:55.464 lat (msec) : 20=1.75%, 50=31.89%, 100=65.39%, 250=0.96% 00:21:55.464 cpu : usr=48.54%, sys=1.64%, ctx=1446, majf=0, minf=9 00:21:55.464 IO depths : 1=0.1%, 2=0.7%, 4=2.2%, 8=80.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.464 filename2: (groupid=0, jobs=1): err= 0: pid=83692: Fri Nov 22 15:00:08 2024 00:21:55.464 read: IOPS=252, BW=1012KiB/s (1036kB/s)(9.92MiB/10039msec) 00:21:55.464 slat (usec): min=3, max=8052, avg=39.60, stdev=380.53 00:21:55.464 clat (msec): min=15, max=110, avg=63.00, stdev=16.34 00:21:55.464 lat (msec): min=15, max=110, avg=63.04, stdev=16.34 00:21:55.464 clat percentiles (msec): 00:21:55.464 | 1.00th=[ 19], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 50], 00:21:55.464 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 67], 00:21:55.464 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:21:55.464 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 111], 99.95th=[ 111], 00:21:55.464 | 99.99th=[ 111] 00:21:55.464 bw ( KiB/s): min= 768, max= 1648, per=3.86%, avg=1011.60, stdev=175.40, samples=20 00:21:55.464 iops : min= 192, max= 412, avg=252.90, stdev=43.85, samples=20 00:21:55.464 lat (msec) : 20=1.26%, 50=19.06%, 100=78.34%, 250=1.34% 00:21:55.464 cpu : usr=35.79%, sys=1.33%, ctx=1003, majf=0, minf=0 00:21:55.464 IO depths : 1=0.2%, 2=2.5%, 4=9.3%, 8=72.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 complete : 0=0.0%, 4=90.2%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.464 issued rwts: total=2539,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:55.464 00:21:55.464 Run status group 0 (all jobs): 00:21:55.464 READ: bw=25.6MiB/s (26.9MB/s), 1012KiB/s-1149KiB/s (1036kB/s-1176kB/s), io=257MiB (270MB), run=10002-10047msec 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.464 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
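
The teardown traced above reverses the setup at the start of this pass: target/dif.sh builds each test subsystem out of a null bdev (64 MiB, 512-byte blocks, 16-byte metadata, DIF type 2), an NVMe-oF subsystem, a namespace, and a TCP listener on 10.0.0.3:4420, then deletes the subsystem before its backing bdev. The following is a minimal sketch of the same sequence for a single subsystem, driven through SPDK's scripts/rpc.py instead of the autotest's rpc_cmd wrapper; the rpc.py path and the standalone invocation are assumptions, while every RPC name and argument is copied from the trace.

# assumed: an SPDK nvmf target is already running and reachable on the default RPC socket
rpc=./scripts/rpc.py   # path is an assumption; the autotest uses its own rpc_cmd wrapper

# setup: null bdev with 16-byte metadata and DIF type 2, then subsystem, namespace, TCP listener
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# teardown: delete the subsystem first, then its backing bdev, as the trace does
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_null_delete bdev_null0
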
00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 bdev_null0 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 [2024-11-22 15:00:08.435106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 bdev_null1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.465 { 00:21:55.465 "params": { 00:21:55.465 "name": "Nvme$subsystem", 00:21:55.465 "trtype": "$TEST_TRANSPORT", 00:21:55.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.465 "adrfam": "ipv4", 00:21:55.465 "trsvcid": "$NVMF_PORT", 00:21:55.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.465 "hdgst": ${hdgst:-false}, 00:21:55.465 "ddgst": 
${ddgst:-false} 00:21:55.465 }, 00:21:55.465 "method": "bdev_nvme_attach_controller" 00:21:55.465 } 00:21:55.465 EOF 00:21:55.465 )") 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:55.465 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:55.465 { 00:21:55.465 "params": { 00:21:55.465 "name": "Nvme$subsystem", 00:21:55.465 "trtype": "$TEST_TRANSPORT", 00:21:55.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.465 "adrfam": "ipv4", 00:21:55.465 "trsvcid": "$NVMF_PORT", 00:21:55.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.465 "hdgst": ${hdgst:-false}, 00:21:55.465 "ddgst": ${ddgst:-false} 00:21:55.465 }, 00:21:55.465 "method": "bdev_nvme_attach_controller" 00:21:55.465 } 00:21:55.465 EOF 00:21:55.465 )") 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:55.466 "params": { 00:21:55.466 "name": "Nvme0", 00:21:55.466 "trtype": "tcp", 00:21:55.466 "traddr": "10.0.0.3", 00:21:55.466 "adrfam": "ipv4", 00:21:55.466 "trsvcid": "4420", 00:21:55.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:55.466 "hdgst": false, 00:21:55.466 "ddgst": false 00:21:55.466 }, 00:21:55.466 "method": "bdev_nvme_attach_controller" 00:21:55.466 },{ 00:21:55.466 "params": { 00:21:55.466 "name": "Nvme1", 00:21:55.466 "trtype": "tcp", 00:21:55.466 "traddr": "10.0.0.3", 00:21:55.466 "adrfam": "ipv4", 00:21:55.466 "trsvcid": "4420", 00:21:55.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.466 "hdgst": false, 00:21:55.466 "ddgst": false 00:21:55.466 }, 00:21:55.466 "method": "bdev_nvme_attach_controller" 00:21:55.466 }' 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:55.466 15:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:55.466 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:55.466 ... 00:21:55.466 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:55.466 ... 
00:21:55.466 fio-3.35 00:21:55.466 Starting 4 threads 00:21:59.660 00:21:59.660 filename0: (groupid=0, jobs=1): err= 0: pid=83837: Fri Nov 22 15:00:14 2024 00:21:59.660 read: IOPS=2451, BW=19.1MiB/s (20.1MB/s)(95.8MiB/5002msec) 00:21:59.660 slat (nsec): min=6268, max=84511, avg=19210.43, stdev=10623.39 00:21:59.660 clat (usec): min=692, max=6248, avg=3206.30, stdev=877.10 00:21:59.660 lat (usec): min=700, max=6280, avg=3225.51, stdev=877.60 00:21:59.660 clat percentiles (usec): 00:21:59.660 | 1.00th=[ 1139], 5.00th=[ 1778], 10.00th=[ 1876], 20.00th=[ 2212], 00:21:59.660 | 30.00th=[ 2671], 40.00th=[ 3163], 50.00th=[ 3425], 60.00th=[ 3654], 00:21:59.660 | 70.00th=[ 3818], 80.00th=[ 3982], 90.00th=[ 4178], 95.00th=[ 4293], 00:21:59.660 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 5735], 99.95th=[ 5997], 00:21:59.660 | 99.99th=[ 6128] 00:21:59.660 bw ( KiB/s): min=16656, max=22704, per=25.44%, avg=19488.00, stdev=2097.62, samples=9 00:21:59.660 iops : min= 2082, max= 2838, avg=2436.00, stdev=262.20, samples=9 00:21:59.660 lat (usec) : 750=0.03%, 1000=0.02% 00:21:59.660 lat (msec) : 2=12.93%, 4=67.40%, 10=19.62% 00:21:59.660 cpu : usr=94.94%, sys=4.14%, ctx=35, majf=0, minf=9 00:21:59.660 IO depths : 1=1.1%, 2=8.2%, 4=59.5%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.660 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.660 issued rwts: total=12261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:59.660 filename0: (groupid=0, jobs=1): err= 0: pid=83838: Fri Nov 22 15:00:14 2024 00:21:59.660 read: IOPS=2271, BW=17.7MiB/s (18.6MB/s)(88.8MiB/5001msec) 00:21:59.660 slat (nsec): min=3432, max=87876, avg=19903.32, stdev=11145.12 00:21:59.660 clat (usec): min=396, max=6512, avg=3452.40, stdev=816.56 00:21:59.660 lat (usec): min=407, max=6523, avg=3472.30, stdev=816.26 00:21:59.660 clat percentiles (usec): 00:21:59.660 | 1.00th=[ 1074], 5.00th=[ 1860], 10.00th=[ 2180], 20.00th=[ 2704], 00:21:59.661 | 30.00th=[ 3261], 40.00th=[ 3490], 50.00th=[ 3720], 60.00th=[ 3851], 00:21:59.661 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4424], 00:21:59.661 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5604], 99.95th=[ 5669], 00:21:59.661 | 99.99th=[ 5997] 00:21:59.661 bw ( KiB/s): min=16096, max=20112, per=24.03%, avg=18407.11, stdev=1500.47, samples=9 00:21:59.661 iops : min= 2012, max= 2514, avg=2300.89, stdev=187.56, samples=9 00:21:59.661 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.24% 00:21:59.661 lat (msec) : 2=6.28%, 4=68.38%, 10=25.08% 00:21:59.661 cpu : usr=94.44%, sys=4.66%, ctx=85, majf=0, minf=9 00:21:59.661 IO depths : 1=1.2%, 2=14.4%, 4=56.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 issued rwts: total=11361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:59.661 filename1: (groupid=0, jobs=1): err= 0: pid=83839: Fri Nov 22 15:00:14 2024 00:21:59.661 read: IOPS=2464, BW=19.3MiB/s (20.2MB/s)(96.3MiB/5002msec) 00:21:59.661 slat (nsec): min=3360, max=84207, avg=17184.01, stdev=10204.91 00:21:59.661 clat (usec): min=417, max=8162, avg=3194.14, stdev=862.77 00:21:59.661 lat (usec): min=429, max=8187, avg=3211.33, stdev=863.78 00:21:59.661 clat percentiles (usec): 
00:21:59.661 | 1.00th=[ 1172], 5.00th=[ 1778], 10.00th=[ 1926], 20.00th=[ 2245], 00:21:59.661 | 30.00th=[ 2606], 40.00th=[ 3195], 50.00th=[ 3458], 60.00th=[ 3621], 00:21:59.661 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4113], 95.00th=[ 4228], 00:21:59.661 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5932], 99.95th=[ 7963], 00:21:59.661 | 99.99th=[ 8029] 00:21:59.661 bw ( KiB/s): min=17776, max=21552, per=25.75%, avg=19726.22, stdev=1303.69, samples=9 00:21:59.661 iops : min= 2222, max= 2694, avg=2465.78, stdev=162.96, samples=9 00:21:59.661 lat (usec) : 500=0.02%, 750=0.11%, 1000=0.19% 00:21:59.661 lat (msec) : 2=11.14%, 4=71.64%, 10=16.90% 00:21:59.661 cpu : usr=94.62%, sys=4.50%, ctx=10, majf=0, minf=0 00:21:59.661 IO depths : 1=0.8%, 2=8.3%, 4=59.4%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 issued rwts: total=12329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:59.661 filename1: (groupid=0, jobs=1): err= 0: pid=83840: Fri Nov 22 15:00:14 2024 00:21:59.661 read: IOPS=2387, BW=18.7MiB/s (19.6MB/s)(93.3MiB/5001msec) 00:21:59.661 slat (usec): min=3, max=178, avg=21.15, stdev=11.23 00:21:59.661 clat (usec): min=381, max=6217, avg=3282.28, stdev=829.89 00:21:59.661 lat (usec): min=391, max=6252, avg=3303.43, stdev=829.99 00:21:59.661 clat percentiles (usec): 00:21:59.661 | 1.00th=[ 1532], 5.00th=[ 1844], 10.00th=[ 2057], 20.00th=[ 2311], 00:21:59.661 | 30.00th=[ 2900], 40.00th=[ 3294], 50.00th=[ 3523], 60.00th=[ 3720], 00:21:59.661 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4293], 00:21:59.661 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5932], 99.95th=[ 5997], 00:21:59.661 | 99.99th=[ 6128] 00:21:59.661 bw ( KiB/s): min=17360, max=20384, per=24.70%, avg=18919.11, stdev=804.31, samples=9 00:21:59.661 iops : min= 2170, max= 2548, avg=2364.89, stdev=100.54, samples=9 00:21:59.661 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.12% 00:21:59.661 lat (msec) : 2=8.60%, 4=73.67%, 10=17.59% 00:21:59.661 cpu : usr=93.88%, sys=4.90%, ctx=30, majf=0, minf=10 00:21:59.661 IO depths : 1=1.3%, 2=10.3%, 4=58.2%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:59.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.661 issued rwts: total=11942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.661 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:59.661 00:21:59.661 Run status group 0 (all jobs): 00:21:59.661 READ: bw=74.8MiB/s (78.4MB/s), 17.7MiB/s-19.3MiB/s (18.6MB/s-20.2MB/s), io=374MiB (392MB), run=5001-5002msec 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 ************************************ 00:21:59.920 END TEST fio_dif_rand_params 00:21:59.920 ************************************ 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.920 00:21:59.920 real 0m23.539s 00:21:59.920 user 2m6.341s 00:21:59.920 sys 0m5.573s 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 15:00:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:59.920 15:00:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:59.920 15:00:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 ************************************ 00:21:59.920 START TEST fio_dif_digest 00:21:59.920 ************************************ 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:59.920 15:00:14 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:59.920 bdev_null0 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.920 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:00.179 [2024-11-22 15:00:14.602504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:00.179 { 00:22:00.179 "params": { 00:22:00.179 "name": "Nvme$subsystem", 00:22:00.179 "trtype": "$TEST_TRANSPORT", 00:22:00.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.179 "adrfam": "ipv4", 00:22:00.179 "trsvcid": "$NVMF_PORT", 00:22:00.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.179 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:00.179 "hdgst": ${hdgst:-false}, 00:22:00.179 "ddgst": ${ddgst:-false} 00:22:00.179 }, 00:22:00.179 "method": "bdev_nvme_attach_controller" 00:22:00.179 } 00:22:00.179 EOF 00:22:00.179 )") 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:00.179 "params": { 00:22:00.179 "name": "Nvme0", 00:22:00.179 "trtype": "tcp", 00:22:00.179 "traddr": "10.0.0.3", 00:22:00.179 "adrfam": "ipv4", 00:22:00.179 "trsvcid": "4420", 00:22:00.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:00.179 "hdgst": true, 00:22:00.179 "ddgst": true 00:22:00.179 }, 00:22:00.179 "method": "bdev_nvme_attach_controller" 00:22:00.179 }' 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:00.179 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:00.180 15:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:00.180 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:00.180 ... 
00:22:00.180 fio-3.35 00:22:00.180 Starting 3 threads 00:22:12.386 00:22:12.386 filename0: (groupid=0, jobs=1): err= 0: pid=83946: Fri Nov 22 15:00:25 2024 00:22:12.386 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(330MiB/10008msec) 00:22:12.386 slat (nsec): min=6135, max=59522, avg=12982.51, stdev=7280.90 00:22:12.386 clat (usec): min=4450, max=21030, avg=11338.78, stdev=919.40 00:22:12.386 lat (usec): min=4459, max=21051, avg=11351.76, stdev=919.67 00:22:12.386 clat percentiles (usec): 00:22:12.387 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:22:12.387 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:22:12.387 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:22:12.387 | 99.00th=[17171], 99.50th=[17171], 99.90th=[21103], 99.95th=[21103], 00:22:12.387 | 99.99th=[21103] 00:22:12.387 bw ( KiB/s): min=29125, max=35328, per=33.38%, avg=33788.89, stdev=1266.20, samples=19 00:22:12.387 iops : min= 227, max= 276, avg=263.95, stdev=10.00, samples=19 00:22:12.387 lat (msec) : 10=0.19%, 20=99.70%, 50=0.11% 00:22:12.387 cpu : usr=94.62%, sys=4.83%, ctx=68, majf=0, minf=0 00:22:12.387 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 issued rwts: total=2640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:12.387 filename0: (groupid=0, jobs=1): err= 0: pid=83947: Fri Nov 22 15:00:25 2024 00:22:12.387 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10004msec) 00:22:12.387 slat (nsec): min=5545, max=62579, avg=9092.15, stdev=3576.91 00:22:12.387 clat (usec): min=10701, max=20562, avg=11357.85, stdev=887.68 00:22:12.387 lat (usec): min=10708, max=20574, avg=11366.94, stdev=887.69 00:22:12.387 clat percentiles (usec): 00:22:12.387 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:22:12.387 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:22:12.387 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:22:12.387 | 99.00th=[17171], 99.50th=[17171], 99.90th=[20579], 99.95th=[20579], 00:22:12.387 | 99.99th=[20579] 00:22:12.387 bw ( KiB/s): min=28416, max=35328, per=33.38%, avg=33792.00, stdev=1425.35, samples=19 00:22:12.387 iops : min= 222, max= 276, avg=264.00, stdev=11.14, samples=19 00:22:12.387 lat (msec) : 20=99.89%, 50=0.11% 00:22:12.387 cpu : usr=95.54%, sys=3.88%, ctx=43, majf=0, minf=0 00:22:12.387 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:12.387 filename0: (groupid=0, jobs=1): err= 0: pid=83948: Fri Nov 22 15:00:25 2024 00:22:12.387 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10005msec) 00:22:12.387 slat (nsec): min=6439, max=58735, avg=11885.09, stdev=7215.31 00:22:12.387 clat (usec): min=6447, max=17674, avg=11351.42, stdev=905.97 00:22:12.387 lat (usec): min=6455, max=17697, avg=11363.31, stdev=906.21 00:22:12.387 clat percentiles (usec): 00:22:12.387 | 1.00th=[10814], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:22:12.387 | 30.00th=[11076], 
40.00th=[11076], 50.00th=[11076], 60.00th=[11076], 00:22:12.387 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11863], 95.00th=[12518], 00:22:12.387 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:22:12.387 | 99.99th=[17695] 00:22:12.387 bw ( KiB/s): min=28416, max=35328, per=33.34%, avg=33751.58, stdev=1413.20, samples=19 00:22:12.387 iops : min= 222, max= 276, avg=263.68, stdev=11.04, samples=19 00:22:12.387 lat (msec) : 10=0.11%, 20=99.89% 00:22:12.387 cpu : usr=95.25%, sys=3.90%, ctx=167, majf=0, minf=0 00:22:12.387 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.387 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:12.387 00:22:12.387 Run status group 0 (all jobs): 00:22:12.387 READ: bw=98.8MiB/s (104MB/s), 32.9MiB/s-33.0MiB/s (34.5MB/s-34.6MB/s), io=989MiB (1037MB), run=10004-10008msec 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.387 00:22:12.387 real 0m11.107s 00:22:12.387 user 0m29.250s 00:22:12.387 sys 0m1.576s 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.387 ************************************ 00:22:12.387 END TEST fio_dif_digest 00:22:12.387 ************************************ 00:22:12.387 15:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:12.387 15:00:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:12.387 15:00:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.387 rmmod nvme_tcp 00:22:12.387 rmmod nvme_fabrics 00:22:12.387 rmmod nvme_keyring 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
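Annotation: to read the digest summary above, each of the three jobs sustained roughly 33 MiB/s of 128 KiB reads at queue depth 3 with header and data digests enabled, so the aggregate of 98.8 MiB/s is simply 3 x ~33 MiB/s, and 98.8 MiB/s over the ~10 s runtime gives the reported io=989 MiB; every job finished with err=0, i.e. fio reported no I/O errors. nvmftestfini then unloads the kernel nvme-tcp, nvme-fabrics and nvme-keyring modules (the rmmod lines just above) and, in the lines that follow, kills the nvmf target process (pid 83188) and runs setup.sh reset to release the test devices.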
00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83188 ']' 00:22:12.387 15:00:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83188 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83188 ']' 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83188 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83188 00:22:12.387 killing process with pid 83188 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83188' 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83188 00:22:12.387 15:00:25 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83188 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:12.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:12.387 Waiting for block devices as requested 00:22:12.387 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:12.387 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
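Annotation: the iptr step above undoes only the firewall rules the test installed. Every rule added during setup carries an '-m comment --comment SPDK_NVMF:...' tag, so teardown re-applies the saved ruleset minus the tagged lines instead of flushing the host firewall, and the ip link / ip netns deletions above dismantle the matching virtual test network. In sketch form, exactly the pipeline the trace shows:

    # drop only the rules tagged with the SPDK_NVMF comment during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore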
00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.387 15:00:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:12.387 15:00:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.387 15:00:26 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:22:12.387 ************************************ 00:22:12.387 END TEST nvmf_dif 00:22:12.387 ************************************ 00:22:12.387 00:22:12.387 real 0m59.869s 00:22:12.387 user 3m51.093s 00:22:12.387 sys 0m16.569s 00:22:12.387 15:00:26 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.387 15:00:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:12.387 15:00:26 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:12.387 15:00:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:12.387 15:00:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.387 15:00:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.387 ************************************ 00:22:12.387 START TEST nvmf_abort_qd_sizes 00:22:12.387 ************************************ 00:22:12.388 15:00:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:12.647 * Looking for test storage... 00:22:12.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.647 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.647 --rc genhtml_branch_coverage=1 00:22:12.647 --rc genhtml_function_coverage=1 00:22:12.648 --rc genhtml_legend=1 00:22:12.648 --rc geninfo_all_blocks=1 00:22:12.648 --rc geninfo_unexecuted_blocks=1 00:22:12.648 00:22:12.648 ' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.648 --rc genhtml_branch_coverage=1 00:22:12.648 --rc genhtml_function_coverage=1 00:22:12.648 --rc genhtml_legend=1 00:22:12.648 --rc geninfo_all_blocks=1 00:22:12.648 --rc geninfo_unexecuted_blocks=1 00:22:12.648 00:22:12.648 ' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.648 --rc genhtml_branch_coverage=1 00:22:12.648 --rc genhtml_function_coverage=1 00:22:12.648 --rc genhtml_legend=1 00:22:12.648 --rc geninfo_all_blocks=1 00:22:12.648 --rc geninfo_unexecuted_blocks=1 00:22:12.648 00:22:12.648 ' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.648 --rc genhtml_branch_coverage=1 00:22:12.648 --rc genhtml_function_coverage=1 00:22:12.648 --rc genhtml_legend=1 00:22:12.648 --rc geninfo_all_blocks=1 00:22:12.648 --rc geninfo_unexecuted_blocks=1 00:22:12.648 00:22:12.648 ' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:12.648 Cannot find device "nvmf_init_br" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:12.648 Cannot find device "nvmf_init_br2" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:12.648 Cannot find device "nvmf_tgt_br" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.648 Cannot find device "nvmf_tgt_br2" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:12.648 Cannot find device "nvmf_init_br" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:12.648 Cannot find device "nvmf_init_br2" 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:22:12.648 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:12.908 Cannot find device "nvmf_tgt_br" 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:12.908 Cannot find device "nvmf_tgt_br2" 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:12.908 Cannot find device "nvmf_br" 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:12.908 Cannot find device "nvmf_init_if" 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:12.908 Cannot find device "nvmf_init_if2" 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
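Annotation: the "Cannot find device" messages above are the expected idempotent cleanup of a previous run. The nvmf_veth_init sequence that follows then builds the virtual test network: a nvmf_tgt_ns_spdk namespace for the target side, veth pairs for the initiator (10.0.0.1 and 10.0.0.2) and the target (10.0.0.3 and 10.0.0.4, moved into the namespace), all joined by the nvmf_br bridge, plus SPDK_NVMF-tagged ACCEPT rules for TCP port 4420, after which the ping checks verify reachability in both directions. A condensed sketch of a single initiator/target path (the trace below also creates the second pair and brings every link up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # plus "ip link set ... up" on each interface and an ACCEPT rule for TCP/4420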
00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:12.908 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:13.167 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:13.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:22:13.168 00:22:13.168 --- 10.0.0.3 ping statistics --- 00:22:13.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.168 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:13.168 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:13.168 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:22:13.168 00:22:13.168 --- 10.0.0.4 ping statistics --- 00:22:13.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.168 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:13.168 00:22:13.168 --- 10.0.0.1 ping statistics --- 00:22:13.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.168 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:13.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:13.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:22:13.168 00:22:13.168 --- 10.0.0.2 ping statistics --- 00:22:13.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.168 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:13.168 15:00:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:13.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:13.996 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:13.996 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84600 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84600 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84600 ']' 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.996 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:13.996 [2024-11-22 15:00:28.610264] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
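Annotation: nvmfappstart launches the target inside the test namespace with -m 0xf, a hex core mask selecting cores 0-3 (which is why four reactors come up in the EAL output below), and -e 0xFFFF, which enables every tracepoint group; waitforlisten then blocks until pid 84600 answers on its RPC socket. An equivalent manual launch using the same binary and flags shown in the trace (backgrounding with '&' is an assumption about how one would run it by hand):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    # -m 0xf    -> reactors on cores 0-3 ("Total cores available: 4" below)
    # -e 0xFFFF -> enable all tracepoint groups (matches the 0xFFFF notice below)
    # the test then polls /var/tmp/spdk.sock until the RPC server responds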
00:22:13.996 [2024-11-22 15:00:28.610591] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.254 [2024-11-22 15:00:28.764466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.254 [2024-11-22 15:00:28.827961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.255 [2024-11-22 15:00:28.828035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.255 [2024-11-22 15:00:28.828050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.255 [2024-11-22 15:00:28.828061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.255 [2024-11-22 15:00:28.828070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.255 [2024-11-22 15:00:28.829435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.255 [2024-11-22 15:00:28.829539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.255 [2024-11-22 15:00:28.829541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.255 [2024-11-22 15:00:28.829494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.255 [2024-11-22 15:00:28.898107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.514 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.514 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:22:14.514 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.514 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.514 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:22:14.514 15:00:29 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:14.514 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
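The spdk_target_abort phase that follows reduces to the RPC sequence below, after which the abort example is run once per queue depth (a sketch using rpc.py against the default socket; the trace itself issues the same verbs and arguments through the rpc_cmd wrapper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Expose the first local NVMe device over NVMe/TCP on the target-namespace address.
$rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
# Exercise abort handling at each queue depth the test uses.
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done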
00:22:14.515 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:22:14.515 15:00:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:14.515 15:00:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:14.515 15:00:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:14.515 15:00:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:14.515 ************************************ 00:22:14.515 START TEST spdk_target_abort 00:22:14.515 ************************************ 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:14.515 spdk_targetn1 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:14.515 [2024-11-22 15:00:29.147014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.515 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:14.773 [2024-11-22 15:00:29.186570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.773 15:00:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:14.773 15:00:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:18.064 Initializing NVMe Controllers 00:22:18.064 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:18.064 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:18.064 Initialization complete. Launching workers. 
00:22:18.064 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9799, failed: 0 00:22:18.064 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1082, failed to submit 8717 00:22:18.064 success 814, unsuccessful 268, failed 0 00:22:18.064 15:00:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:18.064 15:00:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:21.353 Initializing NVMe Controllers 00:22:21.353 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:21.353 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:21.353 Initialization complete. Launching workers. 00:22:21.353 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:22:21.353 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7762 00:22:21.353 success 347, unsuccessful 867, failed 0 00:22:21.353 15:00:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:21.353 15:00:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:24.640 Initializing NVMe Controllers 00:22:24.640 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:24.640 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:24.640 Initialization complete. Launching workers. 
00:22:24.640 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31514, failed: 0 00:22:24.640 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2352, failed to submit 29162 00:22:24.640 success 519, unsuccessful 1833, failed 0 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.640 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84600 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84600 ']' 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84600 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84600 00:22:25.207 killing process with pid 84600 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84600' 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84600 00:22:25.207 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84600 00:22:25.207 ************************************ 00:22:25.207 END TEST spdk_target_abort 00:22:25.207 ************************************ 00:22:25.208 00:22:25.208 real 0m10.740s 00:22:25.208 user 0m41.547s 00:22:25.208 sys 0m1.912s 00:22:25.208 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:25.208 15:00:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:25.208 15:00:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:25.208 15:00:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:25.208 15:00:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:25.208 15:00:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:25.466 ************************************ 00:22:25.466 START TEST kernel_target_abort 00:22:25.466 
************************************ 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:25.466 15:00:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:25.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:25.725 Waiting for block devices as requested 00:22:25.725 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:25.984 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:25.984 No valid GPT data, bailing 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:25.984 No valid GPT data, bailing 00:22:25.984 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:26.244 No valid GPT data, bailing 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:26.244 No valid GPT data, bailing 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:26.244 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 --hostid=b8aa9432-d384-4354-98be-2d5e1a66b801 -a 10.0.0.1 -t tcp -s 4420 00:22:26.245 00:22:26.245 Discovery Log Number of Records 2, Generation counter 2 00:22:26.245 =====Discovery Log Entry 0====== 00:22:26.245 trtype: tcp 00:22:26.245 adrfam: ipv4 00:22:26.245 subtype: current discovery subsystem 00:22:26.245 treq: not specified, sq flow control disable supported 00:22:26.245 portid: 1 00:22:26.245 trsvcid: 4420 00:22:26.245 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:26.245 traddr: 10.0.0.1 00:22:26.245 eflags: none 00:22:26.245 sectype: none 00:22:26.245 =====Discovery Log Entry 1====== 00:22:26.245 trtype: tcp 00:22:26.245 adrfam: ipv4 00:22:26.245 subtype: nvme subsystem 00:22:26.245 treq: not specified, sq flow control disable supported 00:22:26.245 portid: 1 00:22:26.245 trsvcid: 4420 00:22:26.245 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:26.245 traddr: 10.0.0.1 00:22:26.245 eflags: none 00:22:26.245 sectype: none 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:26.245 15:00:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:26.245 15:00:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:29.533 Initializing NVMe Controllers 00:22:29.533 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:29.533 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:29.533 Initialization complete. Launching workers. 00:22:29.533 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33443, failed: 0 00:22:29.533 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33443, failed to submit 0 00:22:29.533 success 0, unsuccessful 33443, failed 0 00:22:29.533 15:00:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:29.533 15:00:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:32.823 Initializing NVMe Controllers 00:22:32.823 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:32.823 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:32.823 Initialization complete. Launching workers. 
00:22:32.823 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70191, failed: 0 00:22:32.823 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28978, failed to submit 41213 00:22:32.823 success 0, unsuccessful 28978, failed 0 00:22:32.823 15:00:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:32.823 15:00:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:36.111 Initializing NVMe Controllers 00:22:36.111 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:36.111 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:36.111 Initialization complete. Launching workers. 00:22:36.111 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77285, failed: 0 00:22:36.111 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19278, failed to submit 58007 00:22:36.111 success 0, unsuccessful 19278, failed 0 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:36.111 15:00:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:36.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:38.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.673 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.673 00:22:38.673 real 0m13.125s 00:22:38.673 user 0m5.920s 00:22:38.673 sys 0m4.580s 00:22:38.673 15:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.673 ************************************ 00:22:38.673 END TEST kernel_target_abort 00:22:38.673 ************************************ 00:22:38.673 15:00:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:38.673 
15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.673 rmmod nvme_tcp 00:22:38.673 rmmod nvme_fabrics 00:22:38.673 rmmod nvme_keyring 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.673 Process with pid 84600 is not found 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84600 ']' 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84600 00:22:38.673 15:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84600 ']' 00:22:38.674 15:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84600 00:22:38.674 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84600) - No such process 00:22:38.674 15:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84600 is not found' 00:22:38.674 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:38.674 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:38.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:38.931 Waiting for block devices as requested 00:22:38.931 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:39.190 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:39.190 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:39.449 15:00:53 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:39.449 15:00:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.449 15:00:54 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:39.449 00:22:39.449 real 0m27.035s 00:22:39.449 user 0m48.663s 00:22:39.449 sys 0m8.039s 00:22:39.449 15:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.449 ************************************ 00:22:39.449 15:00:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 END TEST nvmf_abort_qd_sizes 00:22:39.449 ************************************ 00:22:39.449 15:00:54 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:39.449 15:00:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:39.449 15:00:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.449 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:22:39.449 ************************************ 00:22:39.449 START TEST keyring_file 00:22:39.449 ************************************ 00:22:39.449 15:00:54 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:39.708 * Looking for test storage... 
00:22:39.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:39.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.709 --rc genhtml_branch_coverage=1 00:22:39.709 --rc genhtml_function_coverage=1 00:22:39.709 --rc genhtml_legend=1 00:22:39.709 --rc geninfo_all_blocks=1 00:22:39.709 --rc geninfo_unexecuted_blocks=1 00:22:39.709 00:22:39.709 ' 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:39.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.709 --rc genhtml_branch_coverage=1 00:22:39.709 --rc genhtml_function_coverage=1 00:22:39.709 --rc genhtml_legend=1 00:22:39.709 --rc geninfo_all_blocks=1 00:22:39.709 --rc 
geninfo_unexecuted_blocks=1 00:22:39.709 00:22:39.709 ' 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:39.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.709 --rc genhtml_branch_coverage=1 00:22:39.709 --rc genhtml_function_coverage=1 00:22:39.709 --rc genhtml_legend=1 00:22:39.709 --rc geninfo_all_blocks=1 00:22:39.709 --rc geninfo_unexecuted_blocks=1 00:22:39.709 00:22:39.709 ' 00:22:39.709 15:00:54 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:39.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.709 --rc genhtml_branch_coverage=1 00:22:39.709 --rc genhtml_function_coverage=1 00:22:39.709 --rc genhtml_legend=1 00:22:39.709 --rc geninfo_all_blocks=1 00:22:39.709 --rc geninfo_unexecuted_blocks=1 00:22:39.709 00:22:39.709 ' 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.709 15:00:54 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.709 15:00:54 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.709 15:00:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.709 15:00:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.709 15:00:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:39.709 15:00:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:39.709 15:00:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:39.709 15:00:54 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SKj2MCBJST 00:22:39.709 15:00:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:39.709 15:00:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SKj2MCBJST 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SKj2MCBJST 00:22:39.968 15:00:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SKj2MCBJST 00:22:39.968 15:00:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WtGfLLau7y 00:22:39.968 15:00:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:39.968 15:00:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:39.969 15:00:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WtGfLLau7y 00:22:39.969 15:00:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WtGfLLau7y 00:22:39.969 15:00:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WtGfLLau7y 00:22:39.969 15:00:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=85513 00:22:39.969 15:00:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:39.969 15:00:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85513 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85513 ']' 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:39.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.969 15:00:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:39.969 [2024-11-22 15:00:54.514001] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:22:39.969 [2024-11-22 15:00:54.514512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85513 ] 00:22:40.227 [2024-11-22 15:00:54.668274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.227 [2024-11-22 15:00:54.732227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.227 [2024-11-22 15:00:54.840182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:40.486 15:00:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.486 15:00:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:40.486 15:00:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:40.486 15:00:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.486 15:00:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:40.486 [2024-11-22 15:00:55.121331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.486 null0 00:22:40.745 [2024-11-22 15:00:55.153312] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.745 [2024-11-22 15:00:55.153546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.745 15:00:55 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:40.745 [2024-11-22 15:00:55.181285] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:40.745 request: 00:22:40.745 { 00:22:40.745 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.745 "secure_channel": false, 00:22:40.745 "listen_address": { 00:22:40.745 "trtype": "tcp", 00:22:40.745 "traddr": "127.0.0.1", 00:22:40.745 "trsvcid": "4420" 00:22:40.745 }, 00:22:40.745 "method": "nvmf_subsystem_add_listener", 00:22:40.745 "req_id": 1 00:22:40.745 } 
00:22:40.745 Got JSON-RPC error response 00:22:40.745 response: 00:22:40.745 { 00:22:40.745 "code": -32602, 00:22:40.745 "message": "Invalid parameters" 00:22:40.745 } 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.745 15:00:55 keyring_file -- keyring/file.sh@47 -- # bperfpid=85524 00:22:40.745 15:00:55 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:40.745 15:00:55 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85524 /var/tmp/bperf.sock 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85524 ']' 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.745 15:00:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:40.745 [2024-11-22 15:00:55.247684] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
00:22:40.745 [2024-11-22 15:00:55.247937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85524 ] 00:22:40.745 [2024-11-22 15:00:55.396026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.004 [2024-11-22 15:00:55.451192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.004 [2024-11-22 15:00:55.506555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:41.004 15:00:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.004 15:00:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:41.004 15:00:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:41.004 15:00:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:41.263 15:00:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WtGfLLau7y 00:22:41.263 15:00:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WtGfLLau7y 00:22:41.521 15:00:56 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:41.521 15:00:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:41.521 15:00:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:41.521 15:00:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:41.521 15:00:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:41.779 15:00:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SKj2MCBJST == \/\t\m\p\/\t\m\p\.\S\K\j\2\M\C\B\J\S\T ]] 00:22:41.779 15:00:56 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:41.779 15:00:56 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:41.779 15:00:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:41.779 15:00:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:41.779 15:00:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:42.038 15:00:56 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.WtGfLLau7y == \/\t\m\p\/\t\m\p\.\W\t\G\f\L\L\a\u\7\y ]] 00:22:42.038 15:00:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:42.038 15:00:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:42.038 15:00:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:42.038 15:00:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:42.038 15:00:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:42.038 15:00:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:42.296 15:00:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:42.296 15:00:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:42.296 15:00:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:42.296 15:00:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:42.296 15:00:56 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:42.296 15:00:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:42.296 15:00:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:42.554 15:00:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:42.554 15:00:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:42.554 15:00:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:42.814 [2024-11-22 15:00:57.339941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.814 nvme0n1 00:22:42.814 15:00:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:42.814 15:00:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:42.814 15:00:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:42.814 15:00:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:42.814 15:00:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:42.814 15:00:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:43.076 15:00:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:43.076 15:00:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:43.076 15:00:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:43.076 15:00:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:43.076 15:00:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:43.076 15:00:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:43.076 15:00:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:43.338 15:00:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:43.338 15:00:57 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.338 Running I/O for 1 seconds... 
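The steps above boil down to a short RPC sequence against the bdevperf app's UNIX socket: register the two file-based keys, attach an NVMe/TCP controller that names key0 as its TLS PSK, then drive the configured workload through bdevperf.py. A minimal sketch of that sequence, reusing the exact rpc.py calls and temp key paths logged above (illustrative only, not part of the run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Register the PSK files created by prep_key earlier in this run
  $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST
  $rpc -s $sock keyring_file_add_key key1 /tmp/tmp.WtGfLLau7y
  # Attach a controller to the target listening on 127.0.0.1:4420, using key0 as the TLS PSK
  $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # Kick off the randrw workload on the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests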
00:22:44.716 11922.00 IOPS, 46.57 MiB/s 00:22:44.716 Latency(us) 00:22:44.716 [2024-11-22T15:00:59.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.716 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:44.716 nvme0n1 : 1.01 11963.76 46.73 0.00 0.00 10663.22 5779.08 22520.55 00:22:44.716 [2024-11-22T15:00:59.381Z] =================================================================================================================== 00:22:44.716 [2024-11-22T15:00:59.381Z] Total : 11963.76 46.73 0.00 0.00 10663.22 5779.08 22520.55 00:22:44.716 { 00:22:44.716 "results": [ 00:22:44.716 { 00:22:44.716 "job": "nvme0n1", 00:22:44.716 "core_mask": "0x2", 00:22:44.716 "workload": "randrw", 00:22:44.716 "percentage": 50, 00:22:44.716 "status": "finished", 00:22:44.716 "queue_depth": 128, 00:22:44.716 "io_size": 4096, 00:22:44.716 "runtime": 1.007292, 00:22:44.716 "iops": 11963.760260182748, 00:22:44.716 "mibps": 46.73343851633886, 00:22:44.716 "io_failed": 0, 00:22:44.716 "io_timeout": 0, 00:22:44.716 "avg_latency_us": 10663.222743340802, 00:22:44.716 "min_latency_us": 5779.083636363636, 00:22:44.716 "max_latency_us": 22520.552727272727 00:22:44.716 } 00:22:44.716 ], 00:22:44.716 "core_count": 1 00:22:44.716 } 00:22:44.716 15:00:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:44.716 15:00:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:44.716 15:00:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:44.716 15:00:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:44.716 15:00:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:44.716 15:00:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:44.716 15:00:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:44.716 15:00:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:44.974 15:00:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:44.974 15:00:59 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:44.974 15:00:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:44.974 15:00:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:44.974 15:00:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:44.974 15:00:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:44.974 15:00:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:45.232 15:00:59 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:45.232 15:00:59 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:45.232 15:00:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:45.233 15:00:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:45.233 15:00:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:45.233 15:00:59 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.233 15:00:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:45.233 15:00:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.233 15:00:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:45.233 15:00:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:45.491 [2024-11-22 15:01:00.028646] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:45.491 [2024-11-22 15:01:00.029328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160c5d0 (107): Transport endpoint is not connected 00:22:45.491 [2024-11-22 15:01:00.030318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160c5d0 (9): Bad file descriptor 00:22:45.491 [2024-11-22 15:01:00.031315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:45.491 [2024-11-22 15:01:00.031338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:45.492 [2024-11-22 15:01:00.031347] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:45.492 [2024-11-22 15:01:00.031357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:45.492 request: 00:22:45.492 { 00:22:45.492 "name": "nvme0", 00:22:45.492 "trtype": "tcp", 00:22:45.492 "traddr": "127.0.0.1", 00:22:45.492 "adrfam": "ipv4", 00:22:45.492 "trsvcid": "4420", 00:22:45.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:45.492 "prchk_reftag": false, 00:22:45.492 "prchk_guard": false, 00:22:45.492 "hdgst": false, 00:22:45.492 "ddgst": false, 00:22:45.492 "psk": "key1", 00:22:45.492 "allow_unrecognized_csi": false, 00:22:45.492 "method": "bdev_nvme_attach_controller", 00:22:45.492 "req_id": 1 00:22:45.492 } 00:22:45.492 Got JSON-RPC error response 00:22:45.492 response: 00:22:45.492 { 00:22:45.492 "code": -5, 00:22:45.492 "message": "Input/output error" 00:22:45.492 } 00:22:45.492 15:01:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:45.492 15:01:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.492 15:01:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.492 15:01:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.492 15:01:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:45.492 15:01:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:45.492 15:01:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:45.492 15:01:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:45.492 15:01:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:45.492 15:01:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:45.751 15:01:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:45.751 15:01:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:45.751 15:01:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:45.751 15:01:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:45.751 15:01:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:45.751 15:01:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:45.751 15:01:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:46.010 15:01:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:46.010 15:01:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:46.010 15:01:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:46.269 15:01:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:46.269 15:01:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:46.528 15:01:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:46.528 15:01:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:46.528 15:01:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:46.787 15:01:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:46.787 15:01:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.SKj2MCBJST 00:22:46.787 15:01:01 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:46.787 15:01:01 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.787 15:01:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:46.787 15:01:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:47.047 [2024-11-22 15:01:01.505733] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SKj2MCBJST': 0100660 00:22:47.047 [2024-11-22 15:01:01.505767] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:47.047 request: 00:22:47.047 { 00:22:47.047 "name": "key0", 00:22:47.047 "path": "/tmp/tmp.SKj2MCBJST", 00:22:47.047 "method": "keyring_file_add_key", 00:22:47.047 "req_id": 1 00:22:47.047 } 00:22:47.047 Got JSON-RPC error response 00:22:47.047 response: 00:22:47.047 { 00:22:47.047 "code": -1, 00:22:47.047 "message": "Operation not permitted" 00:22:47.047 } 00:22:47.047 15:01:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:47.047 15:01:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.047 15:01:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.047 15:01:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.047 15:01:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.SKj2MCBJST 00:22:47.047 15:01:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:47.047 15:01:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SKj2MCBJST 00:22:47.305 15:01:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.SKj2MCBJST 00:22:47.305 15:01:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:47.305 15:01:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:47.305 15:01:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:47.305 15:01:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:47.305 15:01:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:47.305 15:01:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:47.563 15:01:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:47.563 15:01:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:47.563 15:01:02 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.563 15:01:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:47.563 15:01:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:47.822 [2024-11-22 15:01:02.309952] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SKj2MCBJST': No such file or directory 00:22:47.822 [2024-11-22 15:01:02.309987] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:47.822 [2024-11-22 15:01:02.310005] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:47.822 [2024-11-22 15:01:02.310014] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:47.822 [2024-11-22 15:01:02.310022] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:47.822 [2024-11-22 15:01:02.310029] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:47.822 request: 00:22:47.822 { 00:22:47.822 "name": "nvme0", 00:22:47.822 "trtype": "tcp", 00:22:47.822 "traddr": "127.0.0.1", 00:22:47.822 "adrfam": "ipv4", 00:22:47.822 "trsvcid": "4420", 00:22:47.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:47.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:47.822 "prchk_reftag": false, 00:22:47.822 "prchk_guard": false, 00:22:47.822 "hdgst": false, 00:22:47.822 "ddgst": false, 00:22:47.822 "psk": "key0", 00:22:47.822 "allow_unrecognized_csi": false, 00:22:47.822 "method": "bdev_nvme_attach_controller", 00:22:47.822 "req_id": 1 00:22:47.822 } 00:22:47.822 Got JSON-RPC error response 00:22:47.822 response: 00:22:47.822 { 00:22:47.822 "code": -19, 00:22:47.822 "message": "No such device" 00:22:47.822 } 00:22:47.822 15:01:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:47.822 15:01:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.822 15:01:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.822 15:01:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.822 15:01:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:47.822 15:01:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:48.081 15:01:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:48.081 
15:01:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uAFd3pzbOT 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:48.081 15:01:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uAFd3pzbOT 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uAFd3pzbOT 00:22:48.081 15:01:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.uAFd3pzbOT 00:22:48.081 15:01:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uAFd3pzbOT 00:22:48.081 15:01:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uAFd3pzbOT 00:22:48.340 15:01:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:48.340 15:01:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:48.599 nvme0n1 00:22:48.599 15:01:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:48.599 15:01:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:48.599 15:01:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:48.599 15:01:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:48.600 15:01:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:48.600 15:01:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:48.859 15:01:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:48.859 15:01:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:48.859 15:01:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:49.118 15:01:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:49.118 15:01:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:49.118 15:01:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:49.118 15:01:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:49.118 15:01:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:49.378 15:01:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:49.378 15:01:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:49.378 15:01:03 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:22:49.378 15:01:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:49.378 15:01:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:49.378 15:01:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:49.378 15:01:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:49.378 15:01:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:49.378 15:01:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:49.378 15:01:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:49.637 15:01:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:49.637 15:01:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:49.637 15:01:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:50.207 15:01:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:50.207 15:01:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uAFd3pzbOT 00:22:50.207 15:01:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uAFd3pzbOT 00:22:50.207 15:01:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WtGfLLau7y 00:22:50.207 15:01:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WtGfLLau7y 00:22:50.466 15:01:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:50.466 15:01:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:51.035 nvme0n1 00:22:51.035 15:01:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:51.035 15:01:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:51.035 15:01:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:51.035 "subsystems": [ 00:22:51.035 { 00:22:51.035 "subsystem": "keyring", 00:22:51.035 "config": [ 00:22:51.035 { 00:22:51.035 "method": "keyring_file_add_key", 00:22:51.035 "params": { 00:22:51.035 "name": "key0", 00:22:51.035 "path": "/tmp/tmp.uAFd3pzbOT" 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "keyring_file_add_key", 00:22:51.035 "params": { 00:22:51.035 "name": "key1", 00:22:51.035 "path": "/tmp/tmp.WtGfLLau7y" 00:22:51.035 } 00:22:51.035 } 00:22:51.035 ] 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "subsystem": "iobuf", 00:22:51.035 "config": [ 00:22:51.035 { 00:22:51.035 "method": "iobuf_set_options", 00:22:51.035 "params": { 00:22:51.035 "small_pool_count": 8192, 00:22:51.035 "large_pool_count": 1024, 00:22:51.035 "small_bufsize": 8192, 00:22:51.035 "large_bufsize": 135168, 00:22:51.035 "enable_numa": false 00:22:51.035 } 00:22:51.035 } 00:22:51.035 ] 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "subsystem": 
"sock", 00:22:51.035 "config": [ 00:22:51.035 { 00:22:51.035 "method": "sock_set_default_impl", 00:22:51.035 "params": { 00:22:51.035 "impl_name": "uring" 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "sock_impl_set_options", 00:22:51.035 "params": { 00:22:51.035 "impl_name": "ssl", 00:22:51.035 "recv_buf_size": 4096, 00:22:51.035 "send_buf_size": 4096, 00:22:51.035 "enable_recv_pipe": true, 00:22:51.035 "enable_quickack": false, 00:22:51.035 "enable_placement_id": 0, 00:22:51.035 "enable_zerocopy_send_server": true, 00:22:51.035 "enable_zerocopy_send_client": false, 00:22:51.035 "zerocopy_threshold": 0, 00:22:51.035 "tls_version": 0, 00:22:51.035 "enable_ktls": false 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "sock_impl_set_options", 00:22:51.035 "params": { 00:22:51.035 "impl_name": "posix", 00:22:51.035 "recv_buf_size": 2097152, 00:22:51.035 "send_buf_size": 2097152, 00:22:51.035 "enable_recv_pipe": true, 00:22:51.035 "enable_quickack": false, 00:22:51.035 "enable_placement_id": 0, 00:22:51.035 "enable_zerocopy_send_server": true, 00:22:51.035 "enable_zerocopy_send_client": false, 00:22:51.035 "zerocopy_threshold": 0, 00:22:51.035 "tls_version": 0, 00:22:51.035 "enable_ktls": false 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "sock_impl_set_options", 00:22:51.035 "params": { 00:22:51.035 "impl_name": "uring", 00:22:51.035 "recv_buf_size": 2097152, 00:22:51.035 "send_buf_size": 2097152, 00:22:51.035 "enable_recv_pipe": true, 00:22:51.035 "enable_quickack": false, 00:22:51.035 "enable_placement_id": 0, 00:22:51.035 "enable_zerocopy_send_server": false, 00:22:51.035 "enable_zerocopy_send_client": false, 00:22:51.035 "zerocopy_threshold": 0, 00:22:51.035 "tls_version": 0, 00:22:51.035 "enable_ktls": false 00:22:51.035 } 00:22:51.035 } 00:22:51.035 ] 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "subsystem": "vmd", 00:22:51.035 "config": [] 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "subsystem": "accel", 00:22:51.035 "config": [ 00:22:51.035 { 00:22:51.035 "method": "accel_set_options", 00:22:51.035 "params": { 00:22:51.035 "small_cache_size": 128, 00:22:51.035 "large_cache_size": 16, 00:22:51.035 "task_count": 2048, 00:22:51.035 "sequence_count": 2048, 00:22:51.035 "buf_count": 2048 00:22:51.035 } 00:22:51.035 } 00:22:51.035 ] 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "subsystem": "bdev", 00:22:51.035 "config": [ 00:22:51.035 { 00:22:51.035 "method": "bdev_set_options", 00:22:51.035 "params": { 00:22:51.035 "bdev_io_pool_size": 65535, 00:22:51.035 "bdev_io_cache_size": 256, 00:22:51.035 "bdev_auto_examine": true, 00:22:51.035 "iobuf_small_cache_size": 128, 00:22:51.035 "iobuf_large_cache_size": 16 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "bdev_raid_set_options", 00:22:51.035 "params": { 00:22:51.035 "process_window_size_kb": 1024, 00:22:51.035 "process_max_bandwidth_mb_sec": 0 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "bdev_iscsi_set_options", 00:22:51.035 "params": { 00:22:51.035 "timeout_sec": 30 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.035 "method": "bdev_nvme_set_options", 00:22:51.035 "params": { 00:22:51.035 "action_on_timeout": "none", 00:22:51.035 "timeout_us": 0, 00:22:51.035 "timeout_admin_us": 0, 00:22:51.035 "keep_alive_timeout_ms": 10000, 00:22:51.035 "arbitration_burst": 0, 00:22:51.035 "low_priority_weight": 0, 00:22:51.035 "medium_priority_weight": 0, 00:22:51.035 "high_priority_weight": 0, 00:22:51.035 "nvme_adminq_poll_period_us": 
10000, 00:22:51.035 "nvme_ioq_poll_period_us": 0, 00:22:51.035 "io_queue_requests": 512, 00:22:51.035 "delay_cmd_submit": true, 00:22:51.035 "transport_retry_count": 4, 00:22:51.035 "bdev_retry_count": 3, 00:22:51.035 "transport_ack_timeout": 0, 00:22:51.035 "ctrlr_loss_timeout_sec": 0, 00:22:51.035 "reconnect_delay_sec": 0, 00:22:51.035 "fast_io_fail_timeout_sec": 0, 00:22:51.035 "disable_auto_failback": false, 00:22:51.035 "generate_uuids": false, 00:22:51.035 "transport_tos": 0, 00:22:51.035 "nvme_error_stat": false, 00:22:51.035 "rdma_srq_size": 0, 00:22:51.035 "io_path_stat": false, 00:22:51.035 "allow_accel_sequence": false, 00:22:51.035 "rdma_max_cq_size": 0, 00:22:51.035 "rdma_cm_event_timeout_ms": 0, 00:22:51.035 "dhchap_digests": [ 00:22:51.035 "sha256", 00:22:51.035 "sha384", 00:22:51.035 "sha512" 00:22:51.035 ], 00:22:51.035 "dhchap_dhgroups": [ 00:22:51.035 "null", 00:22:51.035 "ffdhe2048", 00:22:51.035 "ffdhe3072", 00:22:51.035 "ffdhe4096", 00:22:51.035 "ffdhe6144", 00:22:51.035 "ffdhe8192" 00:22:51.035 ] 00:22:51.035 } 00:22:51.035 }, 00:22:51.035 { 00:22:51.036 "method": "bdev_nvme_attach_controller", 00:22:51.036 "params": { 00:22:51.036 "name": "nvme0", 00:22:51.036 "trtype": "TCP", 00:22:51.036 "adrfam": "IPv4", 00:22:51.036 "traddr": "127.0.0.1", 00:22:51.036 "trsvcid": "4420", 00:22:51.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:51.036 "prchk_reftag": false, 00:22:51.036 "prchk_guard": false, 00:22:51.036 "ctrlr_loss_timeout_sec": 0, 00:22:51.036 "reconnect_delay_sec": 0, 00:22:51.036 "fast_io_fail_timeout_sec": 0, 00:22:51.036 "psk": "key0", 00:22:51.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:51.036 "hdgst": false, 00:22:51.036 "ddgst": false, 00:22:51.036 "multipath": "multipath" 00:22:51.036 } 00:22:51.036 }, 00:22:51.036 { 00:22:51.036 "method": "bdev_nvme_set_hotplug", 00:22:51.036 "params": { 00:22:51.036 "period_us": 100000, 00:22:51.036 "enable": false 00:22:51.036 } 00:22:51.036 }, 00:22:51.036 { 00:22:51.036 "method": "bdev_wait_for_examine" 00:22:51.036 } 00:22:51.036 ] 00:22:51.036 }, 00:22:51.036 { 00:22:51.036 "subsystem": "nbd", 00:22:51.036 "config": [] 00:22:51.036 } 00:22:51.036 ] 00:22:51.036 }' 00:22:51.036 15:01:05 keyring_file -- keyring/file.sh@115 -- # killprocess 85524 00:22:51.036 15:01:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85524 ']' 00:22:51.036 15:01:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85524 00:22:51.036 15:01:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:51.036 15:01:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85524 00:22:51.295 killing process with pid 85524 00:22:51.295 Received shutdown signal, test time was about 1.000000 seconds 00:22:51.295 00:22:51.295 Latency(us) 00:22:51.295 [2024-11-22T15:01:05.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.295 [2024-11-22T15:01:05.960Z] =================================================================================================================== 00:22:51.295 [2024-11-22T15:01:05.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85524' 00:22:51.295 
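The JSON captured by save_config above is then replayed into a fresh bdevperf instance: the -c /dev/fd/63 argument in the command line that follows indicates the config is handed over through a file descriptor rather than a file on disk. A sketch of that replay step, assuming bash process substitution is what produces /dev/fd/63:

  # $config holds the save_config JSON dumped above (keyring, sock and bdev
  # subsystems, including the bdev_nvme_attach_controller entry with "psk": "key0")
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")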
15:01:05 keyring_file -- common/autotest_common.sh@973 -- # kill 85524 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@978 -- # wait 85524 00:22:51.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:51.295 15:01:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=85761 00:22:51.295 15:01:05 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:51.295 15:01:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85761 /var/tmp/bperf.sock 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85761 ']' 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.295 15:01:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:51.295 15:01:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:51.295 "subsystems": [ 00:22:51.295 { 00:22:51.295 "subsystem": "keyring", 00:22:51.295 "config": [ 00:22:51.295 { 00:22:51.295 "method": "keyring_file_add_key", 00:22:51.295 "params": { 00:22:51.295 "name": "key0", 00:22:51.295 "path": "/tmp/tmp.uAFd3pzbOT" 00:22:51.295 } 00:22:51.295 }, 00:22:51.295 { 00:22:51.295 "method": "keyring_file_add_key", 00:22:51.295 "params": { 00:22:51.295 "name": "key1", 00:22:51.295 "path": "/tmp/tmp.WtGfLLau7y" 00:22:51.295 } 00:22:51.295 } 00:22:51.295 ] 00:22:51.295 }, 00:22:51.295 { 00:22:51.295 "subsystem": "iobuf", 00:22:51.295 "config": [ 00:22:51.295 { 00:22:51.295 "method": "iobuf_set_options", 00:22:51.295 "params": { 00:22:51.295 "small_pool_count": 8192, 00:22:51.295 "large_pool_count": 1024, 00:22:51.295 "small_bufsize": 8192, 00:22:51.295 "large_bufsize": 135168, 00:22:51.295 "enable_numa": false 00:22:51.295 } 00:22:51.295 } 00:22:51.295 ] 00:22:51.295 }, 00:22:51.295 { 00:22:51.295 "subsystem": "sock", 00:22:51.295 "config": [ 00:22:51.295 { 00:22:51.295 "method": "sock_set_default_impl", 00:22:51.295 "params": { 00:22:51.295 "impl_name": "uring" 00:22:51.295 } 00:22:51.295 }, 00:22:51.295 { 00:22:51.295 "method": "sock_impl_set_options", 00:22:51.295 "params": { 00:22:51.295 "impl_name": "ssl", 00:22:51.295 "recv_buf_size": 4096, 00:22:51.295 "send_buf_size": 4096, 00:22:51.295 "enable_recv_pipe": true, 00:22:51.295 "enable_quickack": false, 00:22:51.296 "enable_placement_id": 0, 00:22:51.296 "enable_zerocopy_send_server": true, 00:22:51.296 "enable_zerocopy_send_client": false, 00:22:51.296 "zerocopy_threshold": 0, 00:22:51.296 "tls_version": 0, 00:22:51.296 "enable_ktls": false 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "sock_impl_set_options", 00:22:51.296 "params": { 00:22:51.296 "impl_name": "posix", 00:22:51.296 "recv_buf_size": 2097152, 00:22:51.296 "send_buf_size": 2097152, 00:22:51.296 "enable_recv_pipe": true, 00:22:51.296 "enable_quickack": false, 00:22:51.296 "enable_placement_id": 0, 00:22:51.296 "enable_zerocopy_send_server": true, 00:22:51.296 "enable_zerocopy_send_client": false, 00:22:51.296 "zerocopy_threshold": 0, 00:22:51.296 "tls_version": 0, 00:22:51.296 "enable_ktls": false 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "sock_impl_set_options", 00:22:51.296 "params": { 00:22:51.296 "impl_name": "uring", 00:22:51.296 
"recv_buf_size": 2097152, 00:22:51.296 "send_buf_size": 2097152, 00:22:51.296 "enable_recv_pipe": true, 00:22:51.296 "enable_quickack": false, 00:22:51.296 "enable_placement_id": 0, 00:22:51.296 "enable_zerocopy_send_server": false, 00:22:51.296 "enable_zerocopy_send_client": false, 00:22:51.296 "zerocopy_threshold": 0, 00:22:51.296 "tls_version": 0, 00:22:51.296 "enable_ktls": false 00:22:51.296 } 00:22:51.296 } 00:22:51.296 ] 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "subsystem": "vmd", 00:22:51.296 "config": [] 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "subsystem": "accel", 00:22:51.296 "config": [ 00:22:51.296 { 00:22:51.296 "method": "accel_set_options", 00:22:51.296 "params": { 00:22:51.296 "small_cache_size": 128, 00:22:51.296 "large_cache_size": 16, 00:22:51.296 "task_count": 2048, 00:22:51.296 "sequence_count": 2048, 00:22:51.296 "buf_count": 2048 00:22:51.296 } 00:22:51.296 } 00:22:51.296 ] 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "subsystem": "bdev", 00:22:51.296 "config": [ 00:22:51.296 { 00:22:51.296 "method": "bdev_set_options", 00:22:51.296 "params": { 00:22:51.296 "bdev_io_pool_size": 65535, 00:22:51.296 "bdev_io_cache_size": 256, 00:22:51.296 "bdev_auto_examine": true, 00:22:51.296 "iobuf_small_cache_size": 128, 00:22:51.296 "iobuf_large_cache_size": 16 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_raid_set_options", 00:22:51.296 "params": { 00:22:51.296 "process_window_size_kb": 1024, 00:22:51.296 "process_max_bandwidth_mb_sec": 0 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_iscsi_set_options", 00:22:51.296 "params": { 00:22:51.296 "timeout_sec": 30 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_nvme_set_options", 00:22:51.296 "params": { 00:22:51.296 "action_on_timeout": "none", 00:22:51.296 "timeout_us": 0, 00:22:51.296 "timeout_admin_us": 0, 00:22:51.296 "keep_alive_timeout_ms": 10000, 00:22:51.296 "arbitration_burst": 0, 00:22:51.296 "low_priority_weight": 0, 00:22:51.296 "medium_priority_weight": 0, 00:22:51.296 "high_priority_weight": 0, 00:22:51.296 "nvme_adminq_poll_period_us": 10000, 00:22:51.296 "nvme_ioq_poll_period_us": 0, 00:22:51.296 "io_queue_requests": 512, 00:22:51.296 "delay_cmd_submit": true, 00:22:51.296 "transport_retry_count": 4, 00:22:51.296 "bdev_retry_count": 3, 00:22:51.296 "transport_ack_timeout": 0, 00:22:51.296 "ctrlr_loss_timeout_sec": 0, 00:22:51.296 "reconnect_delay_sec": 0, 00:22:51.296 "fast_io_fail_timeout_sec": 0, 00:22:51.296 "disable_auto_failback": false, 00:22:51.296 "generate_uuids": false, 00:22:51.296 "transport_tos": 0, 00:22:51.296 "nvme_error_stat": false, 00:22:51.296 "rdma_srq_size": 0, 00:22:51.296 "io_path_stat": false, 00:22:51.296 "allow_accel_sequence": false, 00:22:51.296 "rdma_max_cq_size": 0, 00:22:51.296 "rdma_cm_event_timeout_ms": 0, 00:22:51.296 "dhchap_digests": [ 00:22:51.296 "sha256", 00:22:51.296 "sha384", 00:22:51.296 "sha512" 00:22:51.296 ], 00:22:51.296 "dhchap_dhgroups": [ 00:22:51.296 "null", 00:22:51.296 "ffdhe2048", 00:22:51.296 "ffdhe3072", 00:22:51.296 "ffdhe4096", 00:22:51.296 "ffdhe6144", 00:22:51.296 "ffdhe8192" 00:22:51.296 ] 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_nvme_attach_controller", 00:22:51.296 "params": { 00:22:51.296 "name": "nvme0", 00:22:51.296 "trtype": "TCP", 00:22:51.296 "adrfam": "IPv4", 00:22:51.296 "traddr": "127.0.0.1", 00:22:51.296 "trsvcid": "4420", 00:22:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:51.296 "prchk_reftag": false, 00:22:51.296 
"prchk_guard": false, 00:22:51.296 "ctrlr_loss_timeout_sec": 0, 00:22:51.296 "reconnect_delay_sec": 0, 00:22:51.296 "fast_io_fail_timeout_sec": 0, 00:22:51.296 "psk": "key0", 00:22:51.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:51.296 "hdgst": false, 00:22:51.296 "ddgst": false, 00:22:51.296 "multipath": "multipath" 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_nvme_set_hotplug", 00:22:51.296 "params": { 00:22:51.296 "period_us": 100000, 00:22:51.296 "enable": false 00:22:51.296 } 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "method": "bdev_wait_for_examine" 00:22:51.296 } 00:22:51.296 ] 00:22:51.296 }, 00:22:51.296 { 00:22:51.296 "subsystem": "nbd", 00:22:51.296 "config": [] 00:22:51.296 } 00:22:51.296 ] 00:22:51.296 }' 00:22:51.296 15:01:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.296 15:01:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:51.296 [2024-11-22 15:01:05.943136] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 00:22:51.296 [2024-11-22 15:01:05.943570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85761 ] 00:22:51.555 [2024-11-22 15:01:06.080109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.555 [2024-11-22 15:01:06.127649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.814 [2024-11-22 15:01:06.260516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:51.814 [2024-11-22 15:01:06.313335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.382 15:01:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.382 15:01:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:52.382 15:01:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:52.382 15:01:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:52.382 15:01:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:52.641 15:01:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:52.641 15:01:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:52.641 15:01:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:52.641 15:01:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:52.641 15:01:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:52.641 15:01:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:52.641 15:01:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:52.900 15:01:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:52.900 15:01:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:52.900 15:01:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:52.900 15:01:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:52.900 15:01:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:52.900 15:01:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:52.900 15:01:07 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:53.159 15:01:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:53.159 15:01:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:53.159 15:01:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:53.159 15:01:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:53.418 15:01:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:53.418 15:01:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:53.418 15:01:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.uAFd3pzbOT /tmp/tmp.WtGfLLau7y 00:22:53.418 15:01:07 keyring_file -- keyring/file.sh@20 -- # killprocess 85761 00:22:53.418 15:01:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85761 ']' 00:22:53.418 15:01:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85761 00:22:53.418 15:01:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:53.418 15:01:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.418 15:01:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85761 00:22:53.418 killing process with pid 85761 00:22:53.418 Received shutdown signal, test time was about 1.000000 seconds 00:22:53.418 00:22:53.418 Latency(us) 00:22:53.418 [2024-11-22T15:01:08.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.419 [2024-11-22T15:01:08.084Z] =================================================================================================================== 00:22:53.419 [2024-11-22T15:01:08.084Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.419 15:01:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.419 15:01:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.419 15:01:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85761' 00:22:53.419 15:01:07 keyring_file -- common/autotest_common.sh@973 -- # kill 85761 00:22:53.419 15:01:07 keyring_file -- common/autotest_common.sh@978 -- # wait 85761 00:22:53.678 15:01:08 keyring_file -- keyring/file.sh@21 -- # killprocess 85513 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85513 ']' 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85513 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85513 00:22:53.678 killing process with pid 85513 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85513' 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@973 -- # kill 85513 00:22:53.678 15:01:08 keyring_file -- common/autotest_common.sh@978 -- # wait 85513 00:22:54.247 00:22:54.247 real 0m14.567s 00:22:54.247 user 0m36.163s 00:22:54.247 sys 0m3.053s 00:22:54.247 15:01:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.247 15:01:08 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:54.247 ************************************ 00:22:54.247 END TEST keyring_file 00:22:54.247 ************************************ 00:22:54.247 15:01:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:22:54.247 15:01:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:54.247 15:01:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.247 15:01:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.247 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.247 ************************************ 00:22:54.247 START TEST keyring_linux 00:22:54.247 ************************************ 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:54.247 Joined session keyring: 437800276 00:22:54.247 * Looking for test storage... 00:22:54.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.247 15:01:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.247 --rc genhtml_branch_coverage=1 00:22:54.247 --rc genhtml_function_coverage=1 00:22:54.247 --rc genhtml_legend=1 00:22:54.247 --rc geninfo_all_blocks=1 00:22:54.247 --rc geninfo_unexecuted_blocks=1 00:22:54.247 00:22:54.247 ' 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:54.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.247 --rc genhtml_branch_coverage=1 00:22:54.247 --rc genhtml_function_coverage=1 00:22:54.247 --rc genhtml_legend=1 00:22:54.247 --rc geninfo_all_blocks=1 00:22:54.247 --rc geninfo_unexecuted_blocks=1 00:22:54.247 00:22:54.247 ' 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.247 --rc genhtml_branch_coverage=1 00:22:54.247 --rc genhtml_function_coverage=1 00:22:54.247 --rc genhtml_legend=1 00:22:54.247 --rc geninfo_all_blocks=1 00:22:54.247 --rc geninfo_unexecuted_blocks=1 00:22:54.247 00:22:54.247 ' 00:22:54.247 15:01:08 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.247 --rc genhtml_branch_coverage=1 00:22:54.247 --rc genhtml_function_coverage=1 00:22:54.247 --rc genhtml_legend=1 00:22:54.247 --rc geninfo_all_blocks=1 00:22:54.247 --rc geninfo_unexecuted_blocks=1 00:22:54.247 00:22:54.247 ' 00:22:54.247 15:01:08 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:54.247 15:01:08 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.247 15:01:08 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.247 15:01:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=b8aa9432-d384-4354-98be-2d5e1a66b801 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.248 15:01:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.248 15:01:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.248 15:01:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.248 15:01:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.248 15:01:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.248 15:01:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.248 15:01:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.248 15:01:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:54.248 15:01:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:54.248 15:01:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:54.248 15:01:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:54.248 15:01:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:54.507 15:01:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:54.507 /tmp/:spdk-test:key0 00:22:54.507 15:01:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:54.507 15:01:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:54.507 15:01:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:54.507 15:01:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:54.508 15:01:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:54.508 15:01:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:54.508 /tmp/:spdk-test:key1 00:22:54.508 15:01:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85887 00:22:54.508 15:01:08 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:54.508 15:01:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85887 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85887 ']' 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.508 15:01:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:54.508 [2024-11-22 15:01:09.075356] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
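(Annotation, not captured log output.) The prep_key/format_interchange_psk steps above wrap the raw hex keys 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into NVMe TLS PSK interchange strings before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal hedged sketch of that transformation follows, assuming the interchange layout is base64(key bytes + CRC-32 of the key) behind an NVMeTLSkey-1:<hash-indicator>: prefix; the variable names, the little-endian CRC packing, and the stand-alone wrapper are assumptions to verify against nvmf/common.sh, not commands taken from this run.

# Hedged sketch only: rebuild one interchange PSK the way format_interchange_psk
# appears to (hash indicator 00 means the key is used as-is, no HMAC transform).
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order is an assumption here
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
)
printf '%s' "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0
# If the assumptions hold, $psk matches the NVMeTLSkey-1:00:MDAxMTIy...JEiQ: string
# that keyctl loads into the session keyring further down in this log.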
00:22:54.508 [2024-11-22 15:01:09.075673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85887 ] 00:22:54.772 [2024-11-22 15:01:09.226488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.772 [2024-11-22 15:01:09.288788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.772 [2024-11-22 15:01:09.392000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:55.030 15:01:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.030 15:01:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:55.030 15:01:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:55.030 15:01:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.030 15:01:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:55.030 [2024-11-22 15:01:09.639130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.030 null0 00:22:55.030 [2024-11-22 15:01:09.671089] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.031 [2024-11-22 15:01:09.671485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:55.031 15:01:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.031 15:01:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:55.291 78599806 00:22:55.291 15:01:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:55.291 970184705 00:22:55.291 15:01:09 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:55.291 15:01:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85899 00:22:55.291 15:01:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85899 /var/tmp/bperf.sock 00:22:55.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85899 ']' 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.291 15:01:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:55.291 [2024-11-22 15:01:09.755596] Starting SPDK v25.01-pre git sha1 1e70ad0e1 / DPDK 24.03.0 initialization... 
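(Annotation, not captured log output.) At this point both interchange strings sit in the caller's session keyring as user-type keys, which is why the keyctl add calls above returned the serial numbers 78599806 and 970184705. The short hedged sketch below shows how such a key can be located, dumped, and dropped by hand; it mirrors the get_keysn/unlink_key helpers that appear later in this log, but is a manual spot-check, not a command the test itself runs.

# Hedged spot-check: find the PSK in the session keyring by description,
# print its payload, then unlink it again.
sn=$(keyctl search @s user :spdk-test:key0)   # serial number, e.g. 78599806
keyctl print "$sn"                            # payload: NVMeTLSkey-1:00:...:
keyctl unlink "$sn" @s                        # remove the key from @s when done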
00:22:55.291 [2024-11-22 15:01:09.755866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85899 ] 00:22:55.291 [2024-11-22 15:01:09.907696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.549 [2024-11-22 15:01:09.962169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.116 15:01:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.116 15:01:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:56.116 15:01:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:56.116 15:01:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:56.375 15:01:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:56.375 15:01:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:56.635 [2024-11-22 15:01:11.185976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.635 15:01:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:56.635 15:01:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:56.894 [2024-11-22 15:01:11.442110] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.894 nvme0n1 00:22:56.894 15:01:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:56.894 15:01:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:56.894 15:01:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:56.894 15:01:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:56.894 15:01:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:56.894 15:01:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.153 15:01:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:57.153 15:01:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:57.153 15:01:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:57.153 15:01:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:57.153 15:01:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:57.153 15:01:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:57.153 15:01:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@25 -- # sn=78599806 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:57.721 
15:01:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 78599806 == \7\8\5\9\9\8\0\6 ]] 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 78599806 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:57.721 15:01:12 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.721 Running I/O for 1 seconds... 00:22:58.656 14873.00 IOPS, 58.10 MiB/s 00:22:58.656 Latency(us) 00:22:58.656 [2024-11-22T15:01:13.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.656 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:58.656 nvme0n1 : 1.01 14875.68 58.11 0.00 0.00 8563.20 5153.51 13702.98 00:22:58.656 [2024-11-22T15:01:13.321Z] =================================================================================================================== 00:22:58.656 [2024-11-22T15:01:13.321Z] Total : 14875.68 58.11 0.00 0.00 8563.20 5153.51 13702.98 00:22:58.656 { 00:22:58.656 "results": [ 00:22:58.656 { 00:22:58.656 "job": "nvme0n1", 00:22:58.656 "core_mask": "0x2", 00:22:58.656 "workload": "randread", 00:22:58.656 "status": "finished", 00:22:58.656 "queue_depth": 128, 00:22:58.656 "io_size": 4096, 00:22:58.656 "runtime": 1.008492, 00:22:58.656 "iops": 14875.675761433904, 00:22:58.656 "mibps": 58.10810844310119, 00:22:58.656 "io_failed": 0, 00:22:58.656 "io_timeout": 0, 00:22:58.656 "avg_latency_us": 8563.200031026166, 00:22:58.656 "min_latency_us": 5153.512727272728, 00:22:58.656 "max_latency_us": 13702.981818181817 00:22:58.656 } 00:22:58.656 ], 00:22:58.656 "core_count": 1 00:22:58.656 } 00:22:58.656 15:01:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:58.656 15:01:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:58.915 15:01:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:58.915 15:01:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:58.915 15:01:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:58.915 15:01:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:58.915 15:01:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:58.915 15:01:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:59.175 15:01:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:59.175 15:01:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:59.175 15:01:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:59.175 15:01:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:59.175 15:01:13 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.175 15:01:13 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:59.175 15:01:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:59.434 [2024-11-22 15:01:14.035097] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.434 [2024-11-22 15:01:14.035804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8fb30 (107): Transport endpoint is not connected 00:22:59.434 [2024-11-22 15:01:14.036790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8fb30 (9): Bad file descriptor 00:22:59.434 [2024-11-22 15:01:14.037787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:59.434 [2024-11-22 15:01:14.037955] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:59.434 [2024-11-22 15:01:14.037971] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:59.435 [2024-11-22 15:01:14.037982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:59.435 request: 00:22:59.435 { 00:22:59.435 "name": "nvme0", 00:22:59.435 "trtype": "tcp", 00:22:59.435 "traddr": "127.0.0.1", 00:22:59.435 "adrfam": "ipv4", 00:22:59.435 "trsvcid": "4420", 00:22:59.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:59.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:59.435 "prchk_reftag": false, 00:22:59.435 "prchk_guard": false, 00:22:59.435 "hdgst": false, 00:22:59.435 "ddgst": false, 00:22:59.435 "psk": ":spdk-test:key1", 00:22:59.435 "allow_unrecognized_csi": false, 00:22:59.435 "method": "bdev_nvme_attach_controller", 00:22:59.435 "req_id": 1 00:22:59.435 } 00:22:59.435 Got JSON-RPC error response 00:22:59.435 response: 00:22:59.435 { 00:22:59.435 "code": -5, 00:22:59.435 "message": "Input/output error" 00:22:59.435 } 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@33 -- # sn=78599806 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 78599806 00:22:59.435 1 links removed 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@33 -- # sn=970184705 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 970184705 00:22:59.435 1 links removed 00:22:59.435 15:01:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85899 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85899 ']' 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85899 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.435 15:01:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85899 00:22:59.694 killing process with pid 85899 00:22:59.694 Received shutdown signal, test time was about 1.000000 seconds 00:22:59.694 00:22:59.694 Latency(us) 00:22:59.694 [2024-11-22T15:01:14.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.694 [2024-11-22T15:01:14.359Z] =================================================================================================================== 00:22:59.694 [2024-11-22T15:01:14.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.694 15:01:14 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85899' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 85899 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 85899 00:22:59.694 15:01:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85887 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85887 ']' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85887 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85887 00:22:59.694 killing process with pid 85887 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85887' 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 85887 00:22:59.694 15:01:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 85887 00:23:00.263 00:23:00.263 real 0m6.113s 00:23:00.263 user 0m11.714s 00:23:00.263 sys 0m1.675s 00:23:00.263 ************************************ 00:23:00.263 END TEST keyring_linux 00:23:00.263 ************************************ 00:23:00.263 15:01:14 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.263 15:01:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:00.263 15:01:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:00.263 15:01:14 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:00.263 15:01:14 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:00.263 15:01:14 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:00.263 15:01:14 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:00.263 15:01:14 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:23:00.263 15:01:14 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:00.263 15:01:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.263 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:23:00.263 15:01:14 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:00.263 15:01:14 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:00.263 15:01:14 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:00.263 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:23:02.167 INFO: APP EXITING 00:23:02.167 INFO: killing all VMs 
00:23:02.167 INFO: killing vhost app 00:23:02.167 INFO: EXIT DONE 00:23:03.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:03.104 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:03.104 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:03.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:03.931 Cleaning 00:23:03.931 Removing: /var/run/dpdk/spdk0/config 00:23:03.931 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:03.931 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:03.931 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:03.931 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:03.931 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:03.931 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:03.931 Removing: /var/run/dpdk/spdk1/config 00:23:03.931 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:03.931 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:03.931 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:03.931 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:03.931 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:03.931 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:03.931 Removing: /var/run/dpdk/spdk2/config 00:23:03.931 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:03.931 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:03.931 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:03.931 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:03.931 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:03.931 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:03.931 Removing: /var/run/dpdk/spdk3/config 00:23:03.931 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:03.931 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:03.931 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:03.931 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:03.931 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:03.931 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:03.931 Removing: /var/run/dpdk/spdk4/config 00:23:03.931 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:03.931 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:03.931 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:03.931 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:03.931 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:03.931 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:03.931 Removing: /dev/shm/nvmf_trace.0 00:23:03.931 Removing: /dev/shm/spdk_tgt_trace.pid56870 00:23:03.931 Removing: /var/run/dpdk/spdk0 00:23:03.931 Removing: /var/run/dpdk/spdk1 00:23:03.931 Removing: /var/run/dpdk/spdk2 00:23:03.931 Removing: /var/run/dpdk/spdk3 00:23:03.931 Removing: /var/run/dpdk/spdk4 00:23:03.931 Removing: /var/run/dpdk/spdk_pid56717 00:23:03.931 Removing: /var/run/dpdk/spdk_pid56870 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57074 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57155 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57184 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57292 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57303 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57442 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57643 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57792 00:23:03.931 Removing: /var/run/dpdk/spdk_pid57870 00:23:03.931 
Removing: /var/run/dpdk/spdk_pid57946 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58040 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58123 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58156 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58187 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58261 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58355 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58810 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58849 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58898 00:23:03.931 Removing: /var/run/dpdk/spdk_pid58906 00:23:04.190 Removing: /var/run/dpdk/spdk_pid58974 00:23:04.190 Removing: /var/run/dpdk/spdk_pid58982 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59055 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59063 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59113 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59125 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59165 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59183 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59330 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59360 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59442 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59782 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59794 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59836 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59849 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59865 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59889 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59903 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59924 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59943 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59962 00:23:04.190 Removing: /var/run/dpdk/spdk_pid59983 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60002 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60016 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60031 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60050 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60069 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60092 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60111 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60130 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60151 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60187 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60206 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60231 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60303 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60337 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60352 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60381 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60390 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60405 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60448 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60461 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60495 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60505 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60514 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60529 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60539 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60550 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60563 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60573 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60607 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60633 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60647 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60677 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60687 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60699 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60742 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60759 00:23:04.190 Removing: 
/var/run/dpdk/spdk_pid60786 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60798 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60806 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60819 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60826 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60834 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60847 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60854 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60939 00:23:04.190 Removing: /var/run/dpdk/spdk_pid60987 00:23:04.190 Removing: /var/run/dpdk/spdk_pid61115 00:23:04.190 Removing: /var/run/dpdk/spdk_pid61144 00:23:04.190 Removing: /var/run/dpdk/spdk_pid61189 00:23:04.190 Removing: /var/run/dpdk/spdk_pid61209 00:23:04.190 Removing: /var/run/dpdk/spdk_pid61231 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61251 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61288 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61304 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61382 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61409 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61458 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61544 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61611 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61640 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61745 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61782 00:23:04.449 Removing: /var/run/dpdk/spdk_pid61820 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62052 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62155 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62189 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62213 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62252 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62286 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62319 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62356 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62762 00:23:04.449 Removing: /var/run/dpdk/spdk_pid62806 00:23:04.449 Removing: /var/run/dpdk/spdk_pid63156 00:23:04.449 Removing: /var/run/dpdk/spdk_pid63618 00:23:04.449 Removing: /var/run/dpdk/spdk_pid63888 00:23:04.449 Removing: /var/run/dpdk/spdk_pid64736 00:23:04.449 Removing: /var/run/dpdk/spdk_pid65646 00:23:04.449 Removing: /var/run/dpdk/spdk_pid65765 00:23:04.450 Removing: /var/run/dpdk/spdk_pid65835 00:23:04.450 Removing: /var/run/dpdk/spdk_pid67256 00:23:04.450 Removing: /var/run/dpdk/spdk_pid67569 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71188 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71544 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71653 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71787 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71820 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71851 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71885 00:23:04.450 Removing: /var/run/dpdk/spdk_pid71965 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72093 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72242 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72324 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72519 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72600 00:23:04.450 Removing: /var/run/dpdk/spdk_pid72693 00:23:04.450 Removing: /var/run/dpdk/spdk_pid73059 00:23:04.450 Removing: /var/run/dpdk/spdk_pid73490 00:23:04.450 Removing: /var/run/dpdk/spdk_pid73491 00:23:04.450 Removing: /var/run/dpdk/spdk_pid73492 00:23:04.450 Removing: /var/run/dpdk/spdk_pid73750 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74014 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74410 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74412 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74739 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74754 
00:23:04.450 Removing: /var/run/dpdk/spdk_pid74778 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74805 00:23:04.450 Removing: /var/run/dpdk/spdk_pid74810 00:23:04.450 Removing: /var/run/dpdk/spdk_pid75174 00:23:04.450 Removing: /var/run/dpdk/spdk_pid75227 00:23:04.450 Removing: /var/run/dpdk/spdk_pid75559 00:23:04.450 Removing: /var/run/dpdk/spdk_pid75762 00:23:04.450 Removing: /var/run/dpdk/spdk_pid76194 00:23:04.450 Removing: /var/run/dpdk/spdk_pid76749 00:23:04.450 Removing: /var/run/dpdk/spdk_pid77627 00:23:04.450 Removing: /var/run/dpdk/spdk_pid78252 00:23:04.450 Removing: /var/run/dpdk/spdk_pid78258 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80258 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80319 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80379 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80433 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80533 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80586 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80646 00:23:04.450 Removing: /var/run/dpdk/spdk_pid80693 00:23:04.450 Removing: /var/run/dpdk/spdk_pid81058 00:23:04.450 Removing: /var/run/dpdk/spdk_pid82263 00:23:04.450 Removing: /var/run/dpdk/spdk_pid82396 00:23:04.450 Removing: /var/run/dpdk/spdk_pid82643 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83242 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83405 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83564 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83661 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83833 00:23:04.709 Removing: /var/run/dpdk/spdk_pid83942 00:23:04.709 Removing: /var/run/dpdk/spdk_pid84649 00:23:04.709 Removing: /var/run/dpdk/spdk_pid84680 00:23:04.709 Removing: /var/run/dpdk/spdk_pid84715 00:23:04.709 Removing: /var/run/dpdk/spdk_pid84972 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85008 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85040 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85513 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85524 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85761 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85887 00:23:04.709 Removing: /var/run/dpdk/spdk_pid85899 00:23:04.709 Clean 00:23:04.709 15:01:19 -- common/autotest_common.sh@1453 -- # return 0 00:23:04.709 15:01:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:04.709 15:01:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.709 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.709 15:01:19 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:04.709 15:01:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.709 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.709 15:01:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:04.709 15:01:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:04.709 15:01:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:04.709 15:01:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:04.709 15:01:19 -- spdk/autotest.sh@398 -- # hostname 00:23:04.709 15:01:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:04.972 geninfo: WARNING: invalid characters removed from testname! 
00:23:31.569 15:01:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:31.569 15:01:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:33.471 15:01:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:36.003 15:01:50 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:38.538 15:01:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:40.443 15:01:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:42.977 15:01:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:42.977 15:01:57 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:42.977 15:01:57 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:42.977 15:01:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:42.977 15:01:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:42.977 15:01:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:42.977 + [[ -n 5364 ]] 00:23:42.977 + sudo kill 5364 00:23:42.985 [Pipeline] } 00:23:42.998 [Pipeline] // timeout 00:23:43.002 [Pipeline] } 00:23:43.015 [Pipeline] // stage 00:23:43.019 [Pipeline] } 00:23:43.031 [Pipeline] // catchError 00:23:43.040 [Pipeline] stage 00:23:43.042 [Pipeline] { (Stop VM) 00:23:43.053 [Pipeline] sh 00:23:43.329 + vagrant halt 00:23:46.615 ==> default: Halting domain... 
00:23:53.194 [Pipeline] sh 00:23:53.471 + vagrant destroy -f 00:23:56.758 ==> default: Removing domain... 00:23:56.770 [Pipeline] sh 00:23:57.052 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:57.061 [Pipeline] } 00:23:57.075 [Pipeline] // stage 00:23:57.082 [Pipeline] } 00:23:57.096 [Pipeline] // dir 00:23:57.101 [Pipeline] } 00:23:57.116 [Pipeline] // wrap 00:23:57.122 [Pipeline] } 00:23:57.134 [Pipeline] // catchError 00:23:57.145 [Pipeline] stage 00:23:57.147 [Pipeline] { (Epilogue) 00:23:57.161 [Pipeline] sh 00:23:57.449 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:02.731 [Pipeline] catchError 00:24:02.733 [Pipeline] { 00:24:02.747 [Pipeline] sh 00:24:03.030 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:03.030 Artifacts sizes are good 00:24:03.039 [Pipeline] } 00:24:03.053 [Pipeline] // catchError 00:24:03.066 [Pipeline] archiveArtifacts 00:24:03.074 Archiving artifacts 00:24:03.213 [Pipeline] cleanWs 00:24:03.228 [WS-CLEANUP] Deleting project workspace... 00:24:03.228 [WS-CLEANUP] Deferred wipeout is used... 00:24:03.258 [WS-CLEANUP] done 00:24:03.260 [Pipeline] } 00:24:03.276 [Pipeline] // stage 00:24:03.282 [Pipeline] } 00:24:03.296 [Pipeline] // node 00:24:03.302 [Pipeline] End of Pipeline 00:24:03.341 Finished: SUCCESS